Search Results

Search found 14249 results on 570 pages for 'peoplesoft services procurement'.


  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access.

    Topics include:
    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!

    Further, our audience asked several interesting questions, and we thought we'd share a sampling of them here - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence currently manages data in memory, though disk persistence is in a future release now in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is lower with Coherence, though the well-known NoSQL databases can manage more data. Coherence also has features that the well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that, because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
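
    To make that per-key ordering guarantee concrete, here is a minimal sketch (not from the webcast) of an atomic counter increment done with an EntryProcessor against the classic com.tangosol API; the cache name and key are made up for illustration:

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;
      import com.tangosol.util.InvocableMap;
      import com.tangosol.util.processor.AbstractProcessor;

      public class CounterExample {
          // Coherence strictly orders all invocations against a single key,
          // so this read-modify-write needs no explicit locking.
          public static class IncrementProcessor extends AbstractProcessor {
              @Override
              public Object process(InvocableMap.Entry entry) {
                  Integer current = (Integer) entry.getValue();
                  int next = (current == null ? 0 : current) + 1;
                  entry.setValue(next);
                  return next;
              }
          }

          public static void main(String[] args) {
              NamedCache counters = CacheFactory.getCache("counters"); // hypothetical cache name
              Object value = counters.invoke("page-hits", new IncrementProcessor());
              System.out.println("counter is now " + value);
          }
      }

    The processor executes on the member that owns the key, which is what makes the increment atomic without distributed transaction machinery.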

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution.

    I personally think of retail games as relics of the past. It does not really make much sense to keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel. The world is not all rainbows, however. One big issue with mixing digital distribution and boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they're selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy, as the game can be tightly bound to the service you downloaded it from.

    Many players are, however, complaining about not being able to play their games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer, as we tend to expect that once we have the product we can use it anywhere, because we physically own it. The truth is that we don't actually own the product; the typical EULA actually states that we only have a license to use it. We're not, for instance, allowed to disassemble the product, which an actual owner would be permitted to do.

    Digital distribution allows us to provide games as services instead of selling them as standalone products. This means that for the service to work you have to be connected to the internet, but you still have the same rights to use the product. It's really straightforward: if you downloaded a client from the internet, you are expected to have an internet connection so you're able to connect to the server. A digitally distributed game built on a client-server architecture has the added benefit of allowing you to play anywhere, as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up, and your game can continue anywhere.

    Another development we're seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real-world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to how they play, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by offering players valuable things to buy that make their gaming experience more fun. I am personally very excited about free-to-play games, as it's possible to start building the game together with your players, and there is no need to work on a game for 5 years from start to finish and only then see if it's actually something the players like. This is a typical problem with big movie-like retail games, and the recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • Connecting a ZFS Storage Appliance to an LDAP server

    - by user13138569
    How to hook a ZFS Storage Appliance up to an OpenLDAP server (the original Japanese commentary was lost to encoding damage; the gist is reconstructed in English around the surviving configuration). First, set up OpenLDAP on Solaris 11. The slapd.conf below was used, together with an ldif that registers a test user, user01.

    slapd.conf:

      #
      # See slapd.conf(5) for details on configuration options.
      # This file should NOT be world readable.
      #
      include /etc/openldap/schema/core.schema
      include /etc/openldap/schema/cosine.schema
      include /etc/openldap/schema/nis.schema

      # Define global ACLs to disable default read access.

      # Do not enable referrals until AFTER you have a working directory
      # service AND an understanding of referrals.
      #referral ldap://root.openldap.org

      pidfile /var/openldap/run/slapd.pid
      argsfile /var/openldap/run/slapd.args

      # Load dynamic backend modules:
      modulepath /usr/lib/openldap
      moduleload back_bdb.la
      # moduleload back_hdb.la
      # moduleload back_ldap.la

      # Sample security restrictions
      # Require integrity protection (prevent hijacking)
      # Require 112-bit (3DES or better) encryption for updates
      # Require 63-bit encryption for simple bind
      # security ssf=1 update_ssf=112 simple_bind=64

      # Sample access control policy:
      # Root DSE: allow anyone to read it
      # Subschema (sub)entry DSE: allow anyone to read it
      # Other DSEs:
      # Allow self write access
      # Allow authenticated users read access
      # Allow anonymous users to authenticate
      # Directives needed to implement policy:
      # access to dn.base="" by * read
      # access to dn.base="cn=Subschema" by * read
      # access to *
      # by self write
      # by users read
      # by anonymous auth
      #
      # if no access controls are present, the default policy
      # allows anyone and everyone to read anything but restricts
      # updates to rootdn. (e.g., "access to * by * read")
      #
      # rootdn can always read and write EVERYTHING!

      #######################################################################
      # BDB database definitions
      #######################################################################

      database bdb
      suffix "dc=oracle,dc=com"
      rootdn "cn=Manager,dc=oracle,dc=com"
      # Cleartext passwords, especially for the rootdn, should
      # be avoid. See slappasswd(8) and slapd.conf(5) for details.
      # Use of strong authentication encouraged.
      rootpw secret
      # The database directory MUST exist prior to running slapd AND
      # should only be accessible by the slapd and slap tools.
      # Mode 700 recommended.
      directory /var/openldap/openldap-data
      # Indices to maintain
      index objectClass eq

    The ldif that was loaded:

      dn: dc=oracle,dc=com
      objectClass: dcObject
      objectClass: organization
      dc: oracle
      o: oracle

      dn: cn=Manager,dc=oracle,dc=com
      objectClass: organizationalRole
      cn: Manager

      dn: ou=People,dc=oracle,dc=com
      objectClass: organizationalUnit
      ou: People

      dn: ou=Group,dc=oracle,dc=com
      objectClass: organizationalUnit
      ou: Group

      dn: uid=user01,ou=People,dc=oracle,dc=com
      uid: user01
      objectClass: top
      objectClass: account
      objectClass: posixAccount
      objectClass: shadowAccount
      cn: user01
      uidNumber: 10001
      gidNumber: 10000
      homeDirectory: /home/user01
      userPassword: secret
      loginShell: /bin/bash
      shadowLastChange: 10000
      shadowMin: 0
      shadowMax: 99999
      shadowWarning: 14
      shadowInactive: 99999
      shadowExpire: -1

    With the LDAP server up, point the ZFS Storage Appliance at it under Configuration > SERVICES > LDAP and set the Base search DN so the appliance can look up user01 (a user the appliance cannot resolve is reported as "Unknown or invalid user"). To double-check the directory from the OS side, a Solaris 11 host was also configured as an LDAP client and the account was resolved with getent:
      # svcadm enable svc:/network/nis/domain:default
      # svcadm enable ldap/client
      # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
      System successfully configured
      # getent passwd user01
      user01:x:10001:10000::/home/user01:/bin/bash

    The account resolves, so user01 is visible through LDAP. Next, confirm that user01 can actually use an NFS share on the appliance:

      # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
      # su user01
      bash-4.1$ cd /mnt
      bash-4.1$ touch aaa
      bash-4.1$ ls -l
      total 1
      -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The new file is owned by user01, so the appliance is resolving users through LDAP as intended!
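
    If you would rather verify the same entry over the wire than through the OS naming service, a minimal JNDI lookup sketch like the following should work against the directory defined above (the server IP and search base are taken from the post; everything else is standard JDK):

      import java.util.Hashtable;
      import javax.naming.Context;
      import javax.naming.NamingEnumeration;
      import javax.naming.directory.InitialDirContext;
      import javax.naming.directory.SearchControls;
      import javax.naming.directory.SearchResult;

      public class LdapCheck {
          public static void main(String[] args) throws Exception {
              Hashtable<String, String> env = new Hashtable<>();
              env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
              env.put(Context.PROVIDER_URL, "ldap://192.168.56.201:389"); // server from the post
              InitialDirContext ctx = new InitialDirContext(env);         // anonymous bind

              // Search the whole tree for the test account registered above.
              SearchControls sc = new SearchControls();
              sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
              NamingEnumeration<SearchResult> results =
                      ctx.search("dc=oracle,dc=com", "(uid=user01)", sc);
              while (results.hasMore()) {
                  System.out.println(results.next().getNameInNamespace());
              }
              ctx.close();
          }
      }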

    Read the article

  • Fusion Middleware highlights from the OTN Japan newsletter (No. 4)

    - by rika.tokumichi
    [The original post was written in Japanese, and the text did not survive the character encoding; only scattered fragments remain. From what is recoverable, it was the fourth installment of an OTN Japan column recommending Fusion Middleware content, and it referenced: the Oracle Direct Seminar series, Oracle WebLogic Server, Oracle VM, Developers Summit session reports, the Oracle Customer Successes page, the INFORMATION INDEPTH NEWSLETTERS Fusion Middleware Edition, and Oracle's Dev2DBA Newsletter.]

    Read the article

  • SBS 2008 Backup Drive Full - Error Code '2147942512'

    - by HK1
    We are using Windows Backup on SBS 2008 SP2 and backing up to 1TB external hard drives. Recently, after switching drives, our backups started failing because the backup drive is full and auto-delete isn't automatically deleting older backups/shadow copies. I'm trying to get more information to help me effectively prevent this problem from recurring.

    How I can tell that the drive is getting full: In the event viewer under Windows Logs > Application, I'm seeing Event ID 517, but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with the ID of 5 and a description like this:

      Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'.

    One of the most informative posts I've found on this error is located on Microsoft's Technet Forums. In that post, a Microsoft representative gives this hazy explanation:

      auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup.

    In the above explanation, I do not understand what is meant by "older copies", except that it appears anything older than the very last shadow copy would be considered "older copies". I'm going to make the assumption that this problem, where auto-delete will not work, will affect any hard drive that is large enough to make an effective backup drive - in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once. The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help; it appears to me it will simply delay the problem until a later date.

    In order to resolve this problem for now, I did the following:

    1. Assign the backup drive a disk letter under disk management.
    2. Run the command line with administrative rights.
    3. diskshadow.exe [enter]
    4. delete shadows oldest x: [enter] (where x: is the letter you assigned your backup drive)

    I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 terabyte external hard drive. However, I do not feel this is a satisfactory solution to prevent the problem from happening again. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
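
    As an aside, those 60-80 manual deletes can be batched: diskshadow accepts a script file via its /s switch, so one invocation can run all of the delete commands. A rough sketch of that idea (the drive letter and iteration count are placeholders, and this must run from an elevated prompt):

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.ArrayList;
      import java.util.List;

      public class PruneShadows {
          public static void main(String[] args) throws IOException, InterruptedException {
              String drive = "x:";  // letter assigned to the backup drive
              int iterations = 60;  // how many oldest shadow copies to drop

              // diskshadow runs one script per invocation, so batch the deletes.
              List<String> script = new ArrayList<>();
              for (int i = 0; i < iterations; i++) {
                  script.add("delete shadows oldest " + drive);
              }
              Path file = Files.createTempFile("prune", ".dsh");
              Files.write(file, script);

              Process p = new ProcessBuilder("diskshadow.exe", "/s", file.toString())
                      .inheritIO()
                      .start();
              System.exit(p.waitFor());
          }
      }

    This only automates the workaround, of course; it does not address why auto-delete refuses to reclaim space in the first place.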

    Read the article

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test-bed for my firm's infrastructure replacement. 10 or so Windows/Linux servers will be replaced by 2 physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 GUI, and the rest are things like Elastix, MailArchiver etc., which aren't part of the equation thus far. I have installed Hyper-V Server on a test box, and successfully got two virtual DCs running, SQL 2014 running, and a Windows 8.1 VM which I use for the RSAT tools. When trying to install RDS (the old-fashioned kind, not the newer VDI(?) style), I get a failed installation due to the server not being able to reboot. A couple of articles have said not to do it locally, so I've moved on.

    Sitting at the PowerShell prompt on the Domain Controller or SQL server (both Server Core), I run the following commands:

      Import-Module RemoteDesktop
      New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

      New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter. AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting. Verify that you can connect to the server.
      At line:1 char:1
      + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
      + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
      + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:
    - Triple, triple-checked syntax.
    - Tried various other commands, and a script to accomplish the same task.
    - Checked DNS is functioning as it should.
    - Checked, to the best of my knowledge, that AD is working as it should.
    - Checked that the Network Service has the needed permissions.
    - Created another VM and placed the two roles on different servers.
    - Deleted all VMs and started again with a new domain name (lather, rinse, repeat).
    - Performed the whole installation on a second physical box running Hyper-V Server.
    - Pleaded with it.

    Interestingly, if I perform the installation via a GUI install, the thing just works! Now I know I could convert this to a Server Core role after installation, but that wouldn't teach me what was wrong in the first instance. I've gone through probably 10 pages of various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but none of it seems to be the fix for my set-up. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed the error message in manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!

    Read the article

  • SharePoint threw "Unknown SQL Exception 206 occured." Anyone familiar with this?

    - by dalehhirt
    Our SharePoint instance threw the following errors when attempting to access data through a Content Query Tool:

      04/02/2010 10:45:06.12 w3wp.exe (0x062C) 0x1734 Windows SharePoint Services Database 5586 Critical Unknown SQL Exception 206 occured. Additional error information from SQL Server is included below. Operand type clash: uniqueidentifier is incompatible with datetime

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 Office Server Office Server General 900n Critical A runtime exception was detected. Details follow. Message: Operand type clash: uniqueidentifier is incompatible with datetime Techinal Details: System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData(...

      04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 Office Server Office Server General 900n Critical ...) at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlC

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception (Watson Reporting Cancelled) System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteRead...

      04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...er(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, ...

      04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...CommandBehavior behavior) at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean& bSucceed) at Microsoft.SharePoint.Library.SPRequestInternalClass.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns) at Microsoft.SharePoint.Library.SPRequest.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns) at Microsoft.SharePoint.SPWeb.GetSiteData(SPSiteDataQuery query) at Microsoft.SharePoint.Publishing.CachedArea.GetCrossListQuery(SPSiteDataQuery query, SPWeb currentContext) at Microsoft.SharePoint.Publishing.CrossListQueryCache.GetSiteData(CachedArea cachedArea, SPWeb web, SPSiteDataQuery qu...

      04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception ...ery)

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 78ed Warning Error occured while processing a Content Query Web Part. Performing the following query '

      04/02/2010 10:45:06.25* w3wp.exe (0x062C) 0x1734 CMS Publishing 78ed Warning ...ue" Type="Number"/

    The farm is MOSS 2007 with a SQL Server 2005 backend. Any ideas are welcomed. Dale

    Read the article

  • Windows XP SP3 - Library not registered error... IE8 not installing.

    - by Wesley
    Specs to put things in context: AMD Athlon XP 2400+ @ 1987 MHz / 2 x 512MB PC3200 DDR RAM / WD 160GB IDE HDD / 3DFuzion 128MB GeForce 6200 AGP 4x / FIC AM37 / Windows XP SP3

    Just recently, I was unable to start Windows Media Player. I clicked it and the busy cursor came up, but then nothing happened. Also, I tried doing a search for a file, and the same thing: it would show the busy cursor for a second and then suddenly stop doing anything. I couldn't find it in the Processes tab of the Task Manager. (Perhaps I don't know what I'm looking for?) Also, I was trying to update DirectX, which has been something older than 9.0c for a while now, except the installation fails due to an "internal error". I think the failed DirectX installation has been like that for a while... (I remember trying to install DirectX 9.0c a while back, but it still failed.) Windows programs failing to start like this is not something I've ever had before... what could be the cause of these problems?! Thanks in advance. =)

    EDIT1: The weird thing now is that when I try to open User Accounts, I get a message saying "Wrong number of arguments or invalid property assignment." Also, when I'm trying to open services.msc, I'm getting a script error that says "Library not registered." (Code: 0, URL: res://C:\WINDOWS\System32\mmcndmgr.dll/views.htm) Perhaps this is related to my other question, where I seemed to have an unregistered library of some sort.

    EDIT2: The DirectX End-User Runtime Web Installer freakishly worked and updated successfully. Now I have focussed the question on the bigger problem: for WMP11, Help & Support, and Search, I click once, get a busy icon for a second, and then nothing happens. Refer to EDIT1 for the other problems.

    EDIT3: It seems that my problems may be related to some Internet Explorer script errors, so I downloaded the IE8 installer from the Microsoft website. But when I run it and get to the main portion of the installer, it just keeps looping on the Downloading step of the installation. The installer is still running, but I left it for at least 4 hours and the downloading step was still not finished. What is the problem? Also, I uninstalled Ubuntu 9.10 after these problems appeared, but they still remain.

    EDIT4: I'm now getting an Active Desktop Recovery background on startup, and within seconds my computer hangs again. EDIT3 explains the main issue, though.

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic).

    We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10-15 minutes of creating the image - not ideal.

    The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple, and the server remains usable during the snapshot generation. The apparent catch-22 is that you can't simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted, and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach; this is an 010 level course.)

    From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):

    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup.
       b. Launch another instance of your starting AMI (must be EBS-backed).
       c. Stop this instance.
       d. Detach the existing system volume from the new stopped instance and discard it.
       e. Attach the newly created volume as the system volume (/dev/sda1) to the stopped instance.
       f. Re-start the new instance.

    I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?
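
    For step 1, the daily snapshot can also be kicked off programmatically rather than clicked through the console. A rough sketch with the AWS SDK for Java (the volume ID is a placeholder, and credentials are assumed to come from the SDK's default provider chain):

      import com.amazonaws.services.ec2.AmazonEC2;
      import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
      import com.amazonaws.services.ec2.model.CreateSnapshotRequest;
      import com.amazonaws.services.ec2.model.CreateSnapshotResult;

      public class DailySnapshot {
          public static void main(String[] args) {
              AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

              // Snapshot the system volume (/dev/sda1); the instance stays online.
              CreateSnapshotRequest req = new CreateSnapshotRequest()
                      .withVolumeId("vol-0123456789abcdef0") // placeholder system volume
                      .withDescription("daily backup of SharePoint server system volume");
              CreateSnapshotResult res = ec2.createSnapshot(req);
              System.out.println("started snapshot " + res.getSnapshot().getSnapshotId());
          }
      }

    Scheduled once a day, this gives the same snapshot the console produces, which is exactly what the recovery steps above restore from.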

    Read the article

  • networking through openstack installed on vm

    - by Mandar Katdare
    I am trying to set up a test installation of OpenStack on an Ubuntu 12.04 VM running on an ESXi server. So far I have been able to launch the VMs on the ESXi host, however I am unable to assign IP addresses to them. As the VM with the OpenStack installation has a single public IP, I wish to assign IPs to the VMs created through OpenStack so that they can directly interact with the public network itself, without a separate private network. So I feel that bridging would not be the correct option here, but I am unable to find the correct documents to go ahead with such an install.

    My ifconfig looks as follows:

      eth0 Link encap:Ethernet HWaddr 00:0c:29:6f:8a:d7
        inet addr:192.168.4.167 Bcast:192.168.4.255 Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe6f:8ad7/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        RX packets:391640 errors:33 dropped:98 overruns:0 frame:0
        TX packets:545044 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:40303931 (40.3 MB) TX bytes:763127348 (763.1 MB)
        Interrupt:18 Base address:0x2000

      lo Link encap:Local Loopback
        inet addr:127.0.0.1 Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING MTU:16436 Metric:1
        RX packets:146127 errors:0 dropped:0 overruns:0 frame:0
        TX packets:146127 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:799815763 (799.8 MB) TX bytes:799815763 (799.8 MB)

      virbr0 Link encap:Ethernet HWaddr 8a:80:33:32:63:a0
        UP BROADCAST MULTICAST MTU:1500 Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    The eth0 adapter is the one I intend to use for all communication. My nova.conf looks as follows:

      --dhcpbridge_flagfile=/etc/nova/nova.conf
      --dhcpbridge=/usr/bin/nova-dhcpbridge
      --logdir=/var/log/nova
      --state_path=/var/lib/nova
      --lock_path=/var/lock/nova
      --allow_admin_api=true
      --use_deprecated_auth=false
      --auth_strategy=keystone
      --scheduler_driver=nova.scheduler.simple.SimpleScheduler
      --s3_host=192.168.4.167
      --ec2_host=192.168.4.167
      --rabbit_host=192.168.4.167
      --cc_host=192.168.4.167
      --nova_url=http://192.168.4.167:8774/v1.1/
      --routing_source_ip=192.168.4.167
      --glance_api_servers=192.168.4.167:9292
      --image_service=nova.image.glance.GlanceImageService
      --iscsi_ip_prefix=192.168.4
      --sql_connection=mysql://novadbadmin:[email protected]/nova
      --ec2_url=http://192.168.4.167:8773/services/Cloud
      --keystone_ec2_url=http://192.168.4.167:5000/v2.0/ec2tokens
      --api_paste_config=/etc/nova/api-paste.ini
      --libvirt_type=kvm
      --libvirt_use_virtio_for_bridges=true
      --start_guests_on_host_boot=true
      --resume_guests_state_on_host_boot=true
      --vnc_enabled=true
      --vncproxy_url=http://192.168.4.167:6080
      --vnc_console_proxy_url=http://192.168.4.167:6080
      # network specific settings
      --network_manager=nova.network.manager.FlatDHCPManager
      --public_interface=eth0
      --vmwareapi_host_ip=192.168.4.254
      --vmwareapi_host_username=****
      --vmwareapi_host_password=****
      --vmwareapi_wsdl_loc=http://127.0.0.1:8080/wsdl/vim25/vimService.wsdl
      --fixed_range=192.168.4.190/24
      --floating_range=192.168.4.190/24
      --network_size=32
      --flat_network_dhcp_start=192.168.4.190
      --flat_injected=False
      --force_dhcp_release
      --iscsi_helper=tgtadm
      --connection_type=vmwareapi
      --root_helper=sudo nova-rootwrap
      --verbose
      --libvirt_use_virtio_for_bridges
      --ec2_private_dns_show
      --novnc_enabled=true
      --novncproxy_base_url=http://192.168.4.167:6080/vnc_auto.html
      --vncserver_proxyclient_address=192.168.4.167
      --vncserver_listen=192.168.4.167
    192.168.4.167 is my VM with the OpenStack installation and 192.168.4.254 is my ESXi server on which the VM runs. Can anyone advise me on how to proceed? Thanks, Mandar

    Read the article

  • Cisco Pix 501 - reaching local host limit, showing odd IP addresses

    - by cdonner
    I am running out of licenses on my PIX 501, and the show local-host command lists a number of odd IP addresses that do not belong to my 10.10.1.* subnet. Any idea what they are? The only thing I could find on them was a potential owner: DINSA, the Defence Interoperable Network Services Authority, an agency of the Ministry of Defence of the United Kingdom. Does not sound right. I don't see any active connections, though. I can't ping or traceroute these IPs, but they reappear after I clear the list, with various other addresses in that general range, up until the connection limit is reached. Based on the number denied, I suppose I would have a lot more of them if not for the connection limit. Very dubious. Is anybody else seeing this?

      pixfirewall# show local-host
      Interface inside: 10 active, 10 maximum active, **118 denied**
      local host: <10.10.1.110>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.176>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.170>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 1/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.175>, TCP connection count/limit = 11/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 1/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.108>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <25.33.41.115>,    // what is this?
        TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <25.33.226.124>,   // what is this?
        TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.172>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <25.36.114.91>,    // what is this?
        TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      local host: <10.10.1.109>, TCP connection count/limit = 0/unlimited TCP embryonic count = 0 TCP intercept watermark = unlimited UDP connection count/limit = 0/unlimited AAA: Xlate(s): Conn(s):
      pixfirewall#
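
    For what it's worth, dumps like the one above can be screened mechanically: test each local-host address against the inside /24 and flag anything outside it. A quick sketch (the subnet is taken from the question; the sample addresses are the ones shown above):

      import java.net.InetAddress;
      import java.net.UnknownHostException;

      public class SubnetCheck {
          // Returns true if ip falls inside network/prefix (IPv4 only).
          static boolean inSubnet(String ip, String network, int prefix) throws UnknownHostException {
              int a = toInt(InetAddress.getByName(ip).getAddress());
              int n = toInt(InetAddress.getByName(network).getAddress());
              int mask = prefix == 0 ? 0 : ~((1 << (32 - prefix)) - 1);
              return (a & mask) == (n & mask);
          }

          static int toInt(byte[] b) {
              return ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16) | ((b[2] & 0xff) << 8) | (b[3] & 0xff);
          }

          public static void main(String[] args) throws UnknownHostException {
              String[] hosts = { "10.10.1.110", "25.33.41.115", "25.33.226.124", "25.36.114.91" };
              for (String h : hosts) {
                  System.out.println(h + " inside 10.10.1.0/24 = " + inSubnet(h, "10.10.1.0", 24));
              }
          }
      }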

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    EDIT: I noticed that this is more appropriate for superuser.com. I apologize; I don't know how to delete this question.

    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4GB of RAM, of which only 3317MB are seen by the system (I guess because of the 32-bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive.

    For example, right now the system is working (albeit a tad slow), with only Firefox and Wing IDE running, and I have 2GB cached with only 45MB mapped:

      $ free
                   total       used       free     shared    buffers     cached
      Mem:       3346388    3247328      99060          0       8416    2117980
      -/+ buffers/cache:    1120932    2225456
      Swap:      2144668     519448    1625220

      $ cat /proc/meminfo
      MemTotal:       3346388 kB
      MemFree:          97128 kB
      Buffers:           7872 kB
      Cached:         2120224 kB
      SwapCached:      413860 kB
      Active:         2304596 kB
      Inactive:        865984 kB
      Active(anon):   2279168 kB
      Inactive(anon):  830236 kB
      Active(file):     25428 kB
      Inactive(file):   35748 kB
      Unevictable:         32 kB
      Mlocked:             32 kB
      HighTotal:      2492940 kB
      HighFree:          5456 kB
      LowTotal:        853448 kB
      LowFree:          91672 kB
      SwapTotal:      2144668 kB
      SwapFree:       1625244 kB
      Dirty:               84 kB
      Writeback:            0 kB
      AnonPages:       629304 kB
      Mapped:           45768 kB
      Slab:             45600 kB
      SReclaimable:     21756 kB
      SUnreclaim:       23844 kB
      PageTables:        4468 kB
      NFS_Unstable:         0 kB
      Bounce:               0 kB
      WritebackTmp:         0 kB
      CommitLimit:    3817860 kB
      Committed_AS:   3735020 kB
      VmallocTotal:    122880 kB
      VmallocUsed:       9352 kB
      VmallocChunk:     66600 kB
      HugePages_Total:      0
      HugePages_Free:       0
      HugePages_Rsvd:       0
      HugePages_Surp:       0
      Hugepagesize:      4096 kB
      DirectMap4k:      16376 kB
      DirectMap4M:     888832 kB

    If I try to drop the caches, little happens:

      # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                   total       used       free     shared    buffers     cached
      Mem:       3346388    3220580     125808          0       3020    2100600
      -/+ buffers/cache:    1116960    2229428
      Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How do I avoid this? TIA, Marco
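
    The imbalance described here can be read straight off /proc/meminfo: Cached (2,120,224 kB) minus Mapped (45,768 kB) is roughly 2 GB of unmapped pagecache. A throwaway sketch that computes this on any Linux box, using the field names shown in the output above:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.Map;

      public class PageCacheCheck {
          public static void main(String[] args) throws IOException {
              Map<String, Long> mem = new HashMap<>();
              for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                  String[] parts = line.split("\\s+"); // e.g. "Cached: 2120224 kB"
                  mem.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
              }
              long unmapped = mem.get("Cached") - mem.get("Mapped");
              System.out.printf("Cached: %d kB, Mapped: %d kB, unmapped pagecache: %d kB%n",
                      mem.get("Cached"), mem.get("Mapped"), unmapped);
          }
      }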

    Read the article

  • AIX: iscsi volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box, with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in its own volume group, and is connected to an IBM iSCSI DS3300 disk array via the secondary onboard Ethernet port (i.e. we're not using a dedicated HBA; we're using the second onboard Ethernet port for iSCSI exclusively). When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting its volumes before starting the networking stack). The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk, which means the volume group (iscsivg) can't go online. If I run cfgmgr -v to redetect the iSCSI volume, it successfully detects the iSCSI target volume and creates a physical volume reference, but allocates it a different volume ID to what was defined before. For example: rootvg contains hdisk0 and hdisk1, and iscsivg was originally defined with hdisk2 as the physical iSCSI volume. After reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk1 and hdisk3. As there's no hdisk2, I can't varyon the iscsivg volume group. I can't see any existing hdisk2 definition in the ODM, and I can't easily add or change the definition of the physical disk in the iscsivg volume group as it won't "varyon". Exporting the volume group deletes it completely; recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way?

    Can I force a specific hdisk drive designation for an iSCSI target? How do you bring iSCSI volumes online after a reboot? I assume this "just works" with a dedicated HBA instead of a generic Ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working - and only with AIX. The iSCSI array works fine with our Linux and Windows servers; i.e. the volumes get detected and remounted after boot time without any problems, using generic Ethernet adapters.

    Here's some of the config from the AIX box. Defined disks / devices:

      # lsdev
      hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive
      hdisk1 Available 06-08-01-8,0 16 Bit LVD SCSI Disk Drive
      hdisk3 Available Other iSCSI Disk Drive
      iscsi0 Available iSCSI Protocol Device
      scsi0 Available 06-08-00 PCI-X Dual Channel Ultra320 SCSI Adapter bus
      scsi1 Available 06-08-01 PCI-X Dual Channel Ultra320 SCSI Adapter bus
      ses0 Available 06-08-01-15,0 SCSI Enclosure Services Device
      sisscsia0 Available 06-08 PCI-X Dual Channel Ultra320 SCSI Adapter

    iSCSI target definition in /etc/iscsi/targets:

      # IBM DS3300 disk array
      # port 1 on second controller
      10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx

    Physical volumes (after reimporting the volume group):

      # lspv
      hdisk0 0003b08a0d4936b6 rootvg active
      hdisk1 0003b08aaa5cb366 rootvg active
      hdisk3 0003b08a032d04bb iscsivg active

    Read the article

  • Trouble connecting a Ubuntu system to IPv6 tunnel over NAT

    - by John Millikin
    I'm trying to set up an IPv6 tunnel via Hurricane Electric's tunnel-broker service. I've configured my system using their example commands:

      # $ipv4a = tunnel server's IPv4 IP
      # $ipv4b = user's IPv4 IP
      # $ipv6a = tunnel server's side of point-to-point /64 allocation
      # $ipv6b = user's side of point-to-point /64 allocation
      ip tunnel add he-ipv6 mode sit remote $ipv4a local $ipv4b ttl 255
      ip link set he-ipv6 up
      ip addr add $ipv6b dev he-ipv6
      ip route add ::/0 dev he-ipv6

    and have configured my desktop to be in my NAT router's DMZ. The router is running Tomato firmware. But I can't ping any IPv6 services:

      $ ping6 -I he-ipv6 '2001:470:1f04:454::1'
      PING 2001:470:1f04:454::1(2001:470:1f04:454::1) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
      From 2001:470:1f04:454::2 icmp_seq=1 Destination unreachable: Address unreachable
      From 2001:470:1f04:454::2 icmp_seq=2 Destination unreachable: Address unreachable

    I can ping my local address:

      $ ping6 -I he-ipv6 '2001:470:1f04:454::2'
      PING 2001:470:1f04:454::2(2001:470:1f04:454::2) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
      64 bytes from 2001:470:1f04:454::2: icmp_seq=1 ttl=64 time=0.037 ms
      64 bytes from 2001:470:1f04:454::2: icmp_seq=2 ttl=64 time=0.039 ms

    I don't know much about routing, but results I found online suggested the output of ip -6 route and ip addr could be useful:

      $ ip -6 route
      2001:470:1f04:454::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
      fe80::/64 dev virbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
      fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
      fe80::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
      default dev he-ipv6 metric 1024 mtu 1480 advmss 1420 hoplimit 4294967295

      $ ip addr
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
         inet6 ::1/128 scope host
            valid_lft forever preferred_lft forever
      2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
         link/ether 00:1c:c0:a1:98:b2 brd ff:ff:ff:ff:ff:ff
         inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1
         inet6 fe80::21c:c0ff:fea1:98b2/64 scope link
            valid_lft forever preferred_lft forever
      3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
         link/ether 36:4c:33:ab:0d:c6 brd ff:ff:ff:ff:ff:ff
         inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         inet6 fe80::344c:33ff:feab:dc6/64 scope link
            valid_lft forever preferred_lft forever
      4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
         link/ether 00:76:62:6e:65:74 brd ff:ff:ff:ff:ff:ff
      5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
         link/ether 7e:29:5e:7c:ba:93 brd ff:ff:ff:ff:ff:ff
      6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
         link/sit 0.0.0.0 brd 0.0.0.0
      7: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
         link/sit 24.130.225.239 peer 72.52.104.74
         inet6 2001:470:1f04:454::2/64 scope global
            valid_lft forever preferred_lft forever
         inet6 fe80::1882:e1ef/128 scope link
            valid_lft forever preferred_lft forever

    Read the article

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients PCs?

    - by Sootah
    Howdy fellow computer geeks,

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways that we can service our clients remotely whenever possible.

    What we're in need of is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone; having to do that would be unacceptable given the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for.)

    Along with the ease-of-use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must. The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across.

    So - are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euro for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services; Amazon doesn't charge for stopped instances. I was wondering: should I have known this beforehand, or is Microsoft doing a bad job of communicating its pricing model clearly?

    Quote from http://www.microsoft.com/windowsazure/pricing/ :

      Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours.

    I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just stopped them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as being "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only needs a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for instances that aren't running... well, that's Azure's choice, not mine.

    I don't want to spread a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be more clear about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance!

    UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess that didn't reach any human being because I didn't hear anything back. So, I made a telephone call today with a Dutch customer support representative (I live in Holland). She totally understood the problem and she's trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.

    UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, so my credit card will first be charged, after which it will be refunded - but hey, that's no problem for me! I'm glad with the way it's been solved! Thanks everybody!

    Read the article

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera's Hadoop distribution (CDH3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node, running the namenode and jobtracker; the others are slave nodes.

    Normally I use this cluster from the local network for testing. However, when I try to access it from another network I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I am able to get to the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed.

    What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely?

    Here is my mapred-site.xml file:

      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>master:8021</value>
        </property>
        <!-- Enable Hue plugins -->
        <property>
          <name>mapred.jobtracker.plugins</name>
          <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
          <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
        </property>
        <property>
          <name>jobtracker.thrift.address</name>
          <value>0.0.0.0:9290</value>
        </property>
      </configuration>

    And this is in the /etc/hosts file:

      192.168.1.15 master
      192.168.1.14 slave1
      192.168.1.13 slave2
      192.168.1.9 slave3
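
    For reference, a remote submission with the old mapred API ends up looking something like the sketch below. One thing worth noting: the client addresses the cluster as "master", the name from the configuration above, so that name has to resolve from the submitting machine too - and here it maps to a private 192.168.1.x address, which a host on another network cannot reach. The job name and paths in this sketch are made up:

      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapred.FileInputFormat;
      import org.apache.hadoop.mapred.FileOutputFormat;
      import org.apache.hadoop.mapred.JobClient;
      import org.apache.hadoop.mapred.JobConf;

      public class RemoteSubmit {
          public static void main(String[] args) throws Exception {
              JobConf conf = new JobConf(RemoteSubmit.class);
              conf.setJobName("remote-test");
              // These must match the cluster's own view of itself, so "master"
              // needs to resolve (to a reachable address) on the client as well.
              conf.set("fs.default.name", "hdfs://master:8020");
              conf.set("mapred.job.tracker", "master:8021");
              conf.setOutputKeyClass(Text.class);
              conf.setOutputValueClass(IntWritable.class);
              FileInputFormat.setInputPaths(conf, new Path("/user/drasto/in"));
              FileOutputFormat.setOutputPath(conf, new Path("/user/drasto/out"));
              JobClient.runJob(conf); // defaults to the identity mapper/reducer
          }
      }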

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and return data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via nginx, and those are responding perfectly fine, so I think I can rule out any wrongdoing by nginx.

    Here is my God configuration:

      # God configuration
      APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

      God.watch do |w|
        w.name = "app_name"
        w.interval = 30.seconds # default
        w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
        # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
        w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
        w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
        w.start_grace = 10.seconds
        w.restart_grace = 10.seconds
        w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"

        # User under which to run the process
        w.uid = 'web'
        w.gid = 'web'

        # Cleanup the pid file (this is needed for processes running as a daemon)
        w.behavior(:clean_pid_file)

        # Conditions under which to start the process
        w.start_if do |start|
          start.condition(:process_running) do |c|
            c.interval = 5.seconds
            c.running = false
          end
        end

        # Conditions under which to restart the process
        w.restart_if do |restart|
          restart.condition(:memory_usage) do |c|
            c.above = 150.megabytes
            c.times = [3, 5] # 3 out of 5 intervals
          end
          restart.condition(:cpu_usage) do |c|
            c.above = 50.percent
            c.times = 5
          end
        end

        w.lifecycle do |on|
          on.condition(:flapping) do |c|
            c.to_state = [:start, :restart]
            c.times = 5
            c.within = 5.minute
            c.transition = :unmonitored
            c.retry_in = 10.minutes
            c.retry_times = 5
            c.retry_within = 2.hours
          end
        end
      end

    Here is my Unicorn configuration:

      # Unicorn configuration file
      APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

      worker_processes 8
      preload_app true
      pid "#{APP_ROOT}/tmp/unicorn.pid"
      listen 8001

      stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
      stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"

      before_fork do |server, worker|
        old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
        if File.exists?(old_pid) && server.pid != old_pid
          begin
            Process.kill("QUIT", File.read(old_pid).to_i)
          rescue Errno::ENOENT, Errno::ESRCH
            # someone else did our job for us
          end
        end
      end

    I have checked the God status logs, but it appears CPU and memory usage are never out of bounds. I also have something to kill high-memory workers, which can be found on the GitHub blog page here. When running tail -f on the Unicorn logs I see some requests, but they're few and far between, where I was seeing around 60-100 a second before this trouble seemed to arrive. The log also shows workers being reaped and started as expected.

    So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at other times is very slow for long periods (which may or may not be peak traffic times). Any advice is much appreciated.

    Read the article

  • Error connecting to Sonicwall L2TP VPN from iPad/iPhone

    - by db2
    A client has a Sonicwall Pro 2040 running SonicOS 3.0, and they'd like to be able to use the L2TP VPN client from their iPads to connect to internal services (Citrix, etc). I've enabled the L2TP VPN server on the Sonicwall, made sure to set AES-128 for phase 2, and set up the configuration on a test iPad with the appropriate username, password, and pre-shared key. When I attempt to connect, I get some rather cryptic error messages in the log on the Sonicwall:

      2 03/29/2011 12:25:09.096 IKE Responder: IPSec proposal does not match (Phase 2) [My outbound IP address redacted] (admin) [WAN IP address redacted] 10.10.130.7/32 -> [WAN IP address redacted]/32
      3 03/29/2011 12:25:09.096 IKE Responder: Received Quick Mode Request (Phase 2) [My outbound IP address redacted], 61364 (admin) [WAN IP address redacted], 500
      4 03/29/2011 12:25:07.048 IKE Responder: IPSec proposal does not match (Phase 2) [My outbound IP address redacted] (admin) [WAN IP address redacted] 10.10.130.7/32 -> [WAN IP address redacted]/32
      5 03/29/2011 12:25:07.048 IKE Responder: Received Quick Mode Request (Phase 2) [My outbound IP address redacted], 61364 (admin) [WAN IP address redacted], 500

    The console log on the iPad looks like this:

      Mar 29 13:31:24 Daves-iPad racoon[519] <Info>: [519] INFO: ISAKMP-SA established 10.10.130.7[500]-[WAN IP address redacted][500] spi:5d705eb6c760d709:458fcdf80ee8acde
      Mar 29 13:31:24 Daves-iPad racoon[519] <Notice>: IPSec Phase1 established (Initiated by me).
      Mar 29 13:31:24 Daves-iPad kernel[0] <Debug>: launchd[519] Builtin profile: racoon (sandbox)
      Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] INFO: initiate new phase 2 negotiation: 10.10.130.7[500]<=>[WAN IP address redacted][500]
      Mar 29 13:31:25 Daves-iPad racoon[519] <Notice>: IPSec Phase2 started (Initiated by me).
      Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted.
      Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'.
      Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted.
      Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'.
      Mar 29 13:31:55 Daves-iPad pppd[518] <Notice>: IPSec connection failed

    Does this offer any clues as to what's going wrong?

    Read the article

  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using Postfix, and have set up SPF, DKIM, and DomainKeys. I can get my email delivered to Yahoo, but not to Gmail. Here are the headers from an email sent to Yahoo; Yahoo reports the email as DomainKeys verified:

        X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700
        Return-Path: <domains at theeggeadventure.com>
        X-YahooFilteredBulk: 67.207.137.114
        X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe
        X-Originating-IP: [67.207.137.114]
        Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com; domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114) by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700
        Received: by mail.theeggeadventure.com (Postfix, from userid 1003) id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns;
          b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo=
        X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010; t=1269088156;
          bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=; h=Subject:To:Message-Id:Date:From;
          b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j 5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s=
        Subject: cron
        To: <brianegge at yahoo.com>
        X-Mailer: mail (GNU Mailutils 1.2)
        Message-Id: <[email protected]>
        Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        From: [email protected] (domains)
        Content-Length: 818

    When I send to Gmail, I see the following in my server log, but the message doesn't even reach my spam folder:

        Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge>
        Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]>
        Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active)
        Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>, relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51)
        Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed

    I've sent email to test services, and they report that everything verifies OK. I've also checked all the RBL lists, and I'm not on any of them.
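    Since the Postfix log shows Gmail accepting the message (status=sent, 250 2.0.0 OK), the drop is happening after acceptance, which usually points to silent spam classification rather than a bounce. A short sketch of DNS checks worth running; the hostname and IP are taken from the headers above, and dig is standard BIND tooling:

        # Gmail reportedly weighs reverse DNS heavily: the A record of the
        # HELO name and the PTR of the sending IP should agree
        dig +short mail.theeggeadventure.com A
        dig +short -x 67.207.137.114

        # Confirm the published SPF record actually covers the sending IP
        dig +short TXT theeggeadventure.com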

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What's the best Remote Desktop Application?

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them; only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways we can service our clients remotely whenever possible.

    What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair. Ideally it would be web-based, so we don't have to walk customers through installing, connecting, and configuring it over the phone; that would be unacceptable given the level of service they're used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that does all this for them is what we'd be aiming for.) Along with ease of use, the product must not require any further intervention from the client after we have connected: nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must.

    The only product I know of right now that does all of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see is the price: $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options I may not have come across. Are there any equally good products out there? If so, what are they, why do you recommend them, how have you been using them yourself, and what do they cost?

    Read the article

  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody. I installed NetBeans 6.8 and tried to run the GlassFish V3 server. I'm working under Windows Vista, 32-bit. At first it wouldn't run at all, so I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it:

        127.0.0.1 localhost

    Now when I run the GlassFish V3 server, no error is shown, but only INFO messages are displayed:

        3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main
        INFO: Launching GlassFish on Felix platform

        Welcome to Felix
        ================

        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010
        INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010
        INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127
        INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms)
        INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116
        INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155
        INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160
        INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159
        INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4
        INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2
        INFO: Binding RMI port to *:35165
        INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver.
        INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi
        INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate
        INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started
        INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159
        INFO: Perform lazy SSL initialization for the listener 'http-listener-2'
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null}
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null}
        INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160
        INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg
        INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null}

    There is no message such as "GlassFish started"! So when I try to access the admin web interface at localhost:4848, or the server at localhost:8080 or localhost:8181, it doesn't work. What should I do? Thank you very much. Regards.
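    Nothing in the log above is an explicit failure; the "GlassFish v3 (74.2) startup time" line is effectively its started message, so the server may be up but listening on unexpected ports (note the Grizzly listeners binding to 35116-35160 rather than 8080/8181). A couple of hedged checks from a Windows command prompt, assuming a standard GlassFish v3 install with asadmin on the PATH; domain1 is the default domain name, and the one NetBeans created may differ:

        rem Start the domain in verbose mode so any startup error reaches the console
        asadmin start-domain --verbose domain1

        rem List the domains asadmin knows about and whether each is running
        asadmin list-domains

        rem Confirm whether anything is actually listening on the admin port
        netstat -ano | findstr :4848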

    Read the article

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don't see how it is "too localized". Plenty of people like to run their own web apps instead of relying on third-party services; if that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for "Internet audience at large", I must have missed the rule that you are only allowed to ask for help with things that apply to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfills the question, then it is relevant to whomever would download it, and so is not only applicable to an "extraordinarily narrow situation". (The reason I was asking is that Google had announced it was discontinuing FTP support for Blogger, so many people were affected -- read: NOT TOO LOCALIZED -- and were trying to find alternatives.)

    Hi, I am trying to find software (for Windows; PHP; MySQL/SQLite/flat-file; free; open-source) to localize all of my software and services so that I can keep my files on, and host from, my own system instead of some remote computer. I've already selected things like web, FTP, and database servers. I've chosen forum and wiki software, as well as an RCS system. At this point (aside from still needing to choose bug-tracking software) all I'm looking for is blogging software.

    I still use Blogger and am trying to find something I can use to import my Blogger content and store it on (and publish from) my home system. I have read about various blogging packages, including WordPress, MovableType, and TextPattern. The problem is that I am trying to find something like Blogger (which, from what I can tell, is not available on Google Code as open source). What I specifically need is multiple-blog support: multiple blogs a la the Blogger Dashboard, not merely multiple user accounts (although that is important as well). The closest thing I have been able to find is using WordPress categories to simulate multiple blogs, but that's not really what I want. I want software that I can run locally that has a multi-blog dashboard like Blogger's. Any ideas? Thanks a lot!

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached; the other, in my hosting vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 at the vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

        Seq. writes on MD1220 in my office:    1.1 GB/s (bonnie++), 1.3 GB/s (dd)
        Seq. writes on MD3200 at my vendor's:  240 MB/s (bonnie++), 310 MB/s (dd)

    Unfortunately I could not test exactly the same configurations, but the two I have should be comparable. If anything, my good-performing environment is cheaper than the bad-performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me whether I'm missing something or my expectations are too high. To summarize: is it reasonable to expect about 100MB/s of sequential write throughput for each couple of drives in a RAID10 on the MD3200? And is there any trick to enabling such performance on an MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?

    More details about the configurations:

        The good one in my office: Dell R710, 2 CPUs (X5650 @ 2.67GHz, 12 cores), 96GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64; 20x 300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on an MD1220 behind a Dell H800 I/O controller with 1GB cache in the host.

        The not-so-good one at my vendor's: Dell R710, 2 CPUs (L5520 @ 2.27GHz, 8 cores), 144GB DDR3; OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64; 20x 146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, on an MD3200 with two I/O controllers in the array, 1GB cache each.

    Additional information: I also ran the same tests on the same vendor's host, but against different storage: two RAID10s of 14x 146GB 15K RPM drives each, striped together at the OS level, on an MD3000+MD1000. Performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of the vendor's host (2x 146GB 15K RPM drives in RAID1 on a Perc 6i), I got about 128MB/s sequential writes; just two internal drives gave me about half the throughput of 20 drives on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
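    When comparing the two arrays it is worth making sure caching isn't skewing either result: with 96-144GB of RAM, a small test file measures the page cache rather than the disks. A hedged pair of re-tests (the mount point and sizes are placeholders) that take the cache out of the picture:

        # Sequential write that bypasses the page cache and syncs at the end
        dd if=/dev/zero of=/mnt/md3200/ddtest bs=1M count=16384 oflag=direct conv=fdatasync

        # bonnie++ with a dataset about twice the RAM size to defeat caching
        bonnie++ -d /mnt/md3200 -s 300g -n 0 -u root

    If the numbers hold up, it is also worth checking whether the MD3200's write cache is enabled and whether cache mirroring between the two controllers is on, since mirroring every write across controllers can cost a large slice of sequential write bandwidth.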

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most appropriate place for my question, but it seems to me that people sharing this kind of concern would converge either here or on a more Unix-specific subsite. Either way, here goes.

    Background (feel free to skip to The Question, below; this should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise). My go-to place for talking online has been IRC for the last fifteen years. I think it's a great protocol, and the clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then there is private instant messaging. While IRC can cover this with queries and DCC chats, the protocol just isn't meant to work well over intermittent connections, such as on a mobile device, where you often walk around places with low signal.

    I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, and I got called many weird things for not using MSN, but folks eventually got over it.

    Next, Google Talk came along, and it seemed a lot better than MSN ever was. The protocol was open, so I could use whatever client took my fancy. With the advent of smartphones, I just got a Google Talk client on the phone, and have had a really decent, integrated, mostly-universal IM solution. But over the last few months, all Google services have been feeling flaky: IMs will often arrive anywhere between twenty minutes and an hour after being sent, clients will randomly disconnect, client priorities only work sometimes, and sometimes a random device among those connected will get an IM. I think the time has come to look for greener grass.

    The Question: it's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Google Talk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

< Previous Page | 525 526 527 528 529 530 531 532 533 534 535 536  | Next Page >