Search Results

Search found 1795 results on 72 pages for 'veritas cluster'.

Page 69 of 72

  • Memory Leak Issue in Weblogic, SUN, Apache and Oracle classes

    - by Amit
    Hi All, please find below a description of the memory leak issues. Statistics show major growth in the perm area (static classes). Flows were run for 8 hours; a heap dump was taken after 2 hours and again at the end, and growth in the perm area was identified. Statistics from our last run show 240MB of growth in 6 hours (about 40MB per hour); the 2GB heap can hold 3-4 days of this, so the heap will be full in 3-4 days. The heap dumps show growth in the JMS connection/session area and in the classes below.

    Apache:
        org.apache.xml.dtm.DTM[], org.apache.xml.dtm.ref.ExpandedNameTable$ExtendedType, org.jdom.AttributeList, org.jdom.Content[], org.jdom.ContentList, org.jdom.Element

    SUN:
        ConstantPoolCacheKlass, ConstantPoolKlass, ConstMethodKlass, MethodDataKlass, MethodKlass, SymbolKlass, byte[], char[], com.sun.org.apache.xml.internal.dtm.DTM[], com.sun.org.apache.xml.internal.dtm.ref.ExtendedType, java.beans.PropertyDescriptor, java.lang.Class, java.lang.Long, java.lang.ref.WeakReference, java.lang.ref.SoftReference, java.lang.String, java.text.Format[], java.util.concurrent.ConcurrentHashMap$Segment, java.util.LinkedList$Entry

    Weblogic:
        com.bea.console.cvo.ConsoleValueObject$PropertyInfo, com.bea.jsptools.tree.TreeNode, com.bea.netuix.servlets.controls.content.StrutsContent, com.bea.netuix.servlets.controls.layout.FlowLayout, com.bea.netuix.servlets.controls.layout.GridLayout, com.bea.netuix.servlets.controls.layout.Placeholder, com.bea.netuix.servlets.controls.page.Book, com.bea.netuix.servlets.controls.window.Window[], com.bea.netuix.servlets.controls.window.WindowMode, javax.management.modelmbean.ModelMBeanAttributeInfo, weblogic.apache.xerces.parsers.SecurityConfiguration, weblogic.apache.xerces.util.AugmentationsImpl, weblogic.apache.xerces.util.AugmentationsImpl$SmallContainer, weblogic.apache.xerces.util.SymbolTable$Entry, weblogic.apache.xerces.util.XMLAttributesImpl$Attribute, weblogic.apache.xerces.xni.QName, weblogic.apache.xerces.xni.QName[], weblogic.ejb.container.cache.CacheKey, weblogic.ejb20.manager.SimpleKey, weblogic.jdbc.common.internal.ConnectionEnv, weblogic.jdbc.common.internal.StatementCacheKey, weblogic.jms.common.Item, weblogic.jms.common.JMSID, weblogic.jms.frontend.FEConnection, weblogic.logging.MessageLogger$1, weblogic.logging.WLLogRecord, weblogic.rjvm.BubblingAbbrever$BubblingAbbreverEntry, weblogic.rjvm.ClassTableEntry, weblogic.rjvm.JVMID, weblogic.rmi.cluster.ClusterableRemoteRef, weblogic.rmi.internal.CollocatedRemoteRef, weblogic.rmi.internal.PhantomRef, weblogic.rmi.spi.ServiceContext[], weblogic.security.acl.internal.AuthenticatedSubject, weblogic.security.acl.internal.AuthenticatedSubject$SealableSet, weblogic.servlet.internal.ServletRuntimeMBeanImpl, weblogic.transaction.internal.XidImpl, weblogic.utils.collections.ConcurrentHashMap$Entry

    Oracle (XA transaction):
        oracle.jdbc.driver.Binder[], oracle.jdbc.driver.OracleDatabaseMetaData, oracle.jdbc.driver.T4C7Ocommoncall, oracle.jdbc.driver.T4C7Oversion, oracle.jdbc.driver.T4C8Oall, oracle.jdbc.driver.T4C8Oclose, oracle.jdbc.driver.T4C8TTIBfile, oracle.jdbc.driver.T4C8TTIBlob, oracle.jdbc.driver.T4C8TTIClob, oracle.jdbc.driver.T4C8TTIdty, oracle.jdbc.driver.T4C8TTILobd, oracle.jdbc.driver.T4C8TTIpro, oracle.jdbc.driver.T4C8TTIrxh, oracle.jdbc.driver.T4C8TTIuds, oracle.jdbc.driver.T4CCallableStatement, oracle.jdbc.driver.T4CClobAccessor, oracle.jdbc.driver.T4CConnection, oracle.jdbc.driver.T4CMAREngine, oracle.jdbc.driver.T4CNumberAccessor, oracle.jdbc.driver.T4CPreparedStatement, oracle.jdbc.driver.T4CTTIdcb, oracle.jdbc.driver.T4CTTIk2rpc, oracle.jdbc.driver.T4CTTIoac, oracle.jdbc.driver.T4CTTIoac[], oracle.jdbc.driver.T4CTTIoauthenticate, oracle.jdbc.driver.T4CTTIokeyval, oracle.jdbc.driver.T4CTTIoscid, oracle.jdbc.driver.T4CTTIoses, oracle.jdbc.driver.T4CTTIOtxen, oracle.jdbc.driver.T4CTTIOtxse, oracle.jdbc.driver.T4CTTIsto, oracle.jdbc.driver.T4CXAConnection, oracle.jdbc.driver.T4CXAResource, oracle.jdbc.oracore.OracleTypeADT[], oracle.jdbc.xa.OracleXAResource$XidListEntry, oracle.net.ano.Ano, oracle.net.ns.ClientProfile, oracle.net.ns.NetInputStream, oracle.net.ns.NetOutputStream, oracle.net.ns.SessionAtts, oracle.net.nt.ConnOption, oracle.net.nt.ConnStrategy, oracle.net.resolver.AddrResolution, oracle.sql.CharacterSet1Byte

    We are using BEA WebLogic 9.2 MP3, JDK 1.5.0_12, and Oracle 10.2.0.4 (for Oracle we found one patch which needs to be applied to avoid XA transaction memory leaks). But we are stuck resolving the SUN, BEA WebLogic, and Apache leaks. Please suggest... regards, Amit J.
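    Not part of the original post, but a sketch of standard Sun HotSpot flags that help attribute this kind of perm-area growth (-verbose:class and -XX:MaxPermSize are standard options; -XX:+HeapDumpOnOutOfMemoryError is available from 5.0u7 onward, so JDK 1.5.0_12 has it; the sizes and server class here are illustrative):

        # a sketch: diagnostic flags for the managed server's JVM
        # -verbose:class logs every class load, showing what fills the perm area
        # -XX:+HeapDumpOnOutOfMemoryError writes a dump when the leak finally wins
        java -Xmx2048m -XX:MaxPermSize=256m \
             -verbose:class \
             -XX:+HeapDumpOnOutOfMemoryError \
             weblogic.Server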

    Read the article

  • Loading, listing, and using R Modules and Functions in PL/R

    - by Dave Jarvis
    I am having difficulty with:
    - listing the R packages and functions available to PostgreSQL;
    - installing a package (such as Kendall) for use with PL/R;
    - calling an R function within PostgreSQL.

    Listing Available R Packages

    Q.1. How do you find out what R modules have been loaded?

        SELECT * FROM r_typenames();

    That shows the types that are available, but what about checking if Kendall( X, Y ) is loaded? For example, the documentation shows:

        CREATE TABLE plr_modules (
          modseq int4,
          modsrc text
        );

    That seems to allow inserting records to dictate that Kendall is to be loaded, but the following code doesn't explain, syntactically, how to ensure that it gets loaded:

        INSERT INTO plr_modules
          VALUES (0, 'pg.test.module.load <-function(msg) {print(msg)}');

    Q.2. What would the above line look like if you were trying to load Kendall? Q.3. Is it applicable?

    Installing R Packages

    Using the "synaptic" package manager, the following packages have been installed: r-base, r-base-core, r-base-dev, r-base-html, r-base-latex, r-cran-acepack, r-cran-boot, r-cran-car, r-cran-chron, r-cran-cluster, r-cran-codetools, r-cran-design, r-cran-foreign, r-cran-hmisc, r-cran-kernsmooth, r-cran-lattice, r-cran-matrix, r-cran-mgcv, r-cran-nlme, r-cran-quadprog, r-cran-robustbase, r-cran-rpart, r-cran-survival, r-cran-vr, r-recommended.

    Q.4. How do I know if Kendall is in there? Q.5. If it isn't, how do I find out what package it is in? Q.6. If it isn't in a package suitable for installing with apt-get (aptitude, synaptic, dpkg, what have you), how do I go about installing it on Ubuntu? Q.7. Where are the installation steps documented?

    Calling R Functions

    I have the following code:

        EXECUTE 'SELECT '
          'regr_slope( amount, year_taken ),'
          'regr_intercept( amount, year_taken ),'
          'corr( amount, year_taken ),'
          'sum( measurements ) AS total_measurements '
          'FROM temp_regression'
        INTO STRICT slope, intercept, correlation, total_measurements;

    This code calls the PostgreSQL function corr to calculate Pearson's correlation over the data. Ideally, I'd like to do the following (by switching corr for plr_kendall):

        EXECUTE 'SELECT '
          'regr_slope( amount, year_taken ),'
          'regr_intercept( amount, year_taken ),'
          'plr_kendall( amount, year_taken ),'
          'sum( measurements ) AS total_measurements '
          'FROM temp_regression'
        INTO STRICT slope, intercept, correlation, total_measurements;

    Q.8. Do I have to write plr_kendall myself? Q.9. Where can I find a simple example that walks through:
    - loading an R module into PG;
    - writing a PG wrapper for the desired R function;
    - calling the PG wrapper from a SELECT?

    For example, would the last two steps look like:

        create or replace function plr_kendall( _float8, _float8 ) returns float as '
          agg_kendall(arg1, arg2)
        ' language 'plr';

        CREATE AGGREGATE agg_kendall (
          sfunc = plr_array_accum,
          basetype = float8,  -- ???
          stype = _float8,    -- ???
          finalfunc = plr_kendall
        );

    And then the SELECT as above? Thank you!
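    Partially anticipating Q.8, a sketch (untested; it assumes PL/R's convention of exposing unnamed arguments as arg1, arg2, and relies on Kendall's tau being available in base R via cor(..., method = "kendall"), so the Kendall CRAN package may not be needed at all):

        -- a sketch: Kendall's tau over two float8 arrays using base R's cor()
        CREATE OR REPLACE FUNCTION plr_kendall(_float8, _float8) RETURNS float8 AS '
          cor(arg1, arg2, method = "kendall")
        ' LANGUAGE 'plr';

        -- hypothetical usage over aggregated columns (array_agg requires PG >= 8.4):
        -- SELECT plr_kendall(array_agg(amount), array_agg(year_taken)) FROM temp_regression;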

    Read the article

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) on brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP.

    Database: SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway).

    Content Manager: a single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.

    Content Data Store: filesystem located somewhere on the network. One directory for live and one for staging.

    Content Delivery: two servers (cd1 and cd2), each with the following server roles installed. cd1 writes to a filesystem content data store for the live website; cd2 writes to the content data store for the staging website.

    Presentation: two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store, as it's a filesystem. Each of the web servers has the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts?

    Edit: Thanks to Ram S, who linked me to a decent walkthrough; upvoted. I suppose I should have posed some questions, as I didn't really ask a question. I guess I'm a little confused over the content delivery aspect. I have the Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

    Read the article

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work. However, we are having a problem with the kernel loading. I'll start with the first part, bootloader.asm:

        [BITS 16]
        [ORG 0x0000]
        ;
        ; all the stuff in between
        ;
        ; the bottom of the bootstrap loader
        datasector  dw 0x0000
        cluster     dw 0x0000
        ImageName   db "KERNEL SYS"
        msgLoading  db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00
        msgCRLF     db 0x0D, 0x0A, 0x00
        msgProgress db ".", 0x00
        msgFailure  db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00

        TIMES 510-($-$$) DB 0
        DW 0xAA55
        ;*************************************************************************

    The bootloader.asm is too long for the editor without causing it to chug and choke. In addition, the bootloader and kernel do work within bochs, as we do get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point, kernel.asm:

        [BITS 16]
        [ORG 0x0000]

        [SEGMENT .text]             ; code segment
            mov ax, 0x0100          ; location where kernel is loaded
            mov ds, ax
            mov es, ax

            cli
            mov ss, ax              ; stack segment
            mov sp, 0xFFFF          ; stack pointer at 64k limit
            sti

            mov si, strWelcomeMsg   ; load message
            call _disp_str

            mov ah, 0x00
            int 0x16                ; interrupt: await keypress
            int 0x19                ; interrupt: reboot

        _disp_str:
            lodsb                   ; load next character
            or al, al               ; test for NUL character
            jz .DONE
            mov ah, 0x0E            ; BIOS teletype
            mov bh, 0x00            ; display page 0
            mov bl, 0x07            ; text attribute
            int 0x10                ; interrupt: invoke BIOS
            jmp _disp_str
        .DONE:
            ret

        [SEGMENT .data]             ; initialized data segment
            strWelcomeMsg db "Welcome to our OS", 0x00

        [SEGMENT .bss]              ; uninitialized data segment

    Using nasm 2.06rc2, we compile as such:

        nasm bootloader.asm -o bootloader.bin -f bin
        nasm kernel.asm -o kernel.sys -f bin

    We write bootloader.bin to the floppy as such:

        dd if=bootloader.bin bs=512 count=1 of=/dev/fd0

    We write kernel.sys to the floppy as such:

        cp kernel.sys /dev/fd0

    As I stated, this works in bochs. But booting from the floppy we get output like so:

        Loading Kernel Shell
        ...........
        ERROR : Press key to reboot

    Other specifics: OpenSUSE 11.2, GNOME desktop, AMD x64. Any other information I may have missed, feel free to ask. I tried to get everything in here that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either, for several reasons. This could change, but we want to see this boot successful before we really consider GRUB.

    Read the article

  • With SQLAlchemy, how to dynamically bind to a database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via SQLAlchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security.

    Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date. We're using SQLAlchemy's declarative package, though I can't see that this has any bearing on the matter.

    Most examples of SQLAlchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files.

    My question is, how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request?

    We have this working -- we think -- but I'm not only not certain of the safety of it, I also think it looks incredibly heavyweight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call sqlalchemy create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

        # in lib/base.py on the BaseController class
        def __call__(self, environ, start_response):
            # note: web session contains {'username': XXX, 'password': YYY}
            url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
            url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
            finance = create_engine(url1)
            staff = create_engine(url2)
            db_configure(staff, finance)  # see below
            ... etc

        # in another file
        Session = scoped_session(sessionmaker())

        def db_configure(staff, finance):
            s = Session()
            from db.finance import Employee, Customer, Invoice
            for c in [Employee, Customer, Invoice]:
                s.bind_mapper(c, finance)
            from db.staff import Project, Hour
            for c in [Project, Hour]:
                s.bind_mapper(c, staff)
            s.close()  # prevents leaking connections between sessions?

    So the create_engine() calls occur on every request... I can see that being needed, and the connection pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way. Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
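    A possible simplification, sketched under assumptions (SQLAlchemy 0.5's sessionmaker/scoped_session accept a binds dictionary keyed by mapped class, and engines for a given credential pair can be cached process-wide; note the cache below holds plaintext passwords in memory):

        # a sketch, not from the post: cache engines per credential pair and
        # configure all binds in one call instead of a bind_mapper() loop
        from sqlalchemy import create_engine
        from sqlalchemy.orm import scoped_session, sessionmaker

        Session = scoped_session(sessionmaker())
        _engines = {}  # hypothetical process-wide cache, keyed by (user, password)

        def engines_for(session):
            key = (session['username'], session['password'])
            if key not in _engines:
                finance = create_engine('postgres://%(username)s:%(password)s@server1/finance' % session)
                staff = create_engine('postgres://%(username)s:%(password)s@server2/staff' % session)
                _engines[key] = (finance, staff)
            return _engines[key]

        def db_configure(staff, finance):
            from db.finance import Employee, Customer, Invoice
            from db.staff import Project, Hour
            binds = dict((c, finance) for c in (Employee, Customer, Invoice))
            binds.update(dict((c, staff) for c in (Project, Hour)))
            Session.configure(binds=binds)  # call before the request first touches the session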

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of tiffs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which Sharepoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field that it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:
    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best for Asp.Net (I saw Raven and Velocity, but they're still kinda beta)?
    3. Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins; it would only be used when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
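    For comparison with the int_field_1 mapping-table design above, a hedged sketch of one document per tenant-defined type in the MongoDB shell (collection and field names are hypothetical; insert and ensureIndex are standard shell methods of that era):

        // a sketch: tenant-defined index fields stored under their real names,
        // with a compound index per tenant/docType instead of generic columns
        db.documents.insert({
            tenant: "tenant-a",
            docType: "invoice",
            fields: {
                customerId: 1042,
                invoiceNumber: "INV-0042",
                invoiceDate: new Date("2010-03-01")
            }
        });
        // supports the "pick a document type, then search its fields" flow
        db.documents.ensureIndex({ tenant: 1, docType: 1, "fields.invoiceNumber": 1 });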

    Read the article

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to Stack Overflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language. Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong. I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like: "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

    My initial thought was to use heuristics to do the sorting, like:
    - There's a ~60-40% ratio in weight bearing between the front and hind paws;
    - The hind paws are generally smaller in surface;
    - The paws are (often) spatially divided in left and right.

    However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like.

    Based on the answers I received on my question about peak detection within the paw, I'm hoping there are more advanced solutions to sort the paws, especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paw.

    For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is partially measured (in space or time), can be ignored. They are only useful because they can help with recognizing a pattern, but they won't be analyzed. And for anyone interested, I'm keeping a blog with all the updates regarding the project!
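    A starting point, sketched under assumptions (the pickle file name is hypothetical, and each impact is assumed to be a 3-D pressure array with time on the last axis; adapt if the impacts are stored as (array, slice) pairs):

        # a sketch: score every impact with the first two heuristics above
        import pickle
        import numpy as np

        with open('walk_sliced_data.pkl', 'rb') as f:   # hypothetical file name
            walk_sliced_data = pickle.load(f)

        for measurement, impacts in walk_sliced_data.items():
            for idx, paw in impacts.items():
                paw = np.asarray(paw, dtype=float)
                load = paw.sum()                      # front paws carry ~60% of the weight
                area = (paw.max(axis=-1) > 0).sum()   # hind paws have a smaller contact surface
                print('%s impact %s: load=%.0f area=%d' % (measurement, idx, load, area))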

    Read the article

  • FASM vs MASM translation problem in mov si, offset msg

    - by Ruben Trancoso
    Hi folks, I just did my first test with MASM and FASM using (almost) the same code, and I ran into trouble. The only difference is that to produce just the 104 bytes I need to write to the MBR, in FASM I put org 7c00h and in MASM org 0h. The problem is with the mov si, offset msg, which in the first case translates to 44 7C (7c44h) and with MASM translates to 44 00 (0044h) -- but only when I change org 7c00h to org 0h in MASM. Otherwise it will produce the entire segment from 0 to 7dff. How do I solve it? In short, how do I make MASM produce a binary that begins at 7c00h as its first byte, with subsequent jumps remaining relative to 7c00h?

        .model TINY
        .code
        org 7c00h              ; Boot entry point. Address 07c0:0000 on the computer memory
        xor ax, ax             ; Zero out ax
        mov ds, ax             ; Set data segment to base of RAM
        jmp start              ; Jump to the first byte after DOS boot record data

        ; ----------------------------------------------------------------------
        ; DOS boot record data
        ; ----------------------------------------------------------------------
        brINT13Flag     db 90h            ; 0002h - 0EH for INT13 AH=42 READ
        brOEM           db 'MSDOS5.0'     ; 0003h - OEM name & DOS version (8 chars)
        brBPS           dw 512            ; 000Bh - Bytes/sector
        brSPC           db 1              ; 000Dh - Sectors/cluster
        brResCount      dw 1              ; 000Eh - Reserved (boot) sectors
        brFATs          db 2              ; 0010h - FAT copies
        brRootEntries   dw 0E0h           ; 0011h - Root directory entries
        brSectorCount   dw 2880           ; 0013h - Sectors in volume, < 32MB
        brMedia         db 240            ; 0015h - Media descriptor
        brSPF           dw 9              ; 0016h - Sectors per FAT
        brSPH           dw 18             ; 0018h - Sectors per track
        brHPC           dw 2              ; 001Ah - Number of Heads
        brHidden        dd 0              ; 001Ch - Hidden sectors
        brSectors       dd 0              ; 0020h - Total number of sectors
                        db 0              ; 0024h - Physical drive no.
                        db 0              ; 0025h - Reserved (FAT32)
                        db 29h            ; 0026h - Extended boot record sig
        brSerialNum     dd 404418EAh      ; 0027h - Volume serial number (random)
        brLabel         db 'OSAdventure'  ; 002Bh - Volume label (11 chars)
        brFSID          db 'FAT12   '     ; 0036h - File System ID (8 chars)

        ; ----------------------------------------------------------------------
        ; Boot code
        ; ----------------------------------------------------------------------
        start:
            mov si, offset msg
            call showmsg
        hang:
            jmp hang

        msg db 'Loading...',0

        showmsg:
            lodsb
            cmp al, 0
            jz showmsgd
            push si
            mov bx, 0007
            mov ah, 0eh
            int 10h
            pop si
            jmp showmsg
        showmsgd:
            retn

        ; ----------------------------------------------------------------------
        ; Boot record signature
        ; ----------------------------------------------------------------------
        dw 0AA55h              ; Boot record signature

        END

    Read the article

  • Postgres could not connect to server

    - by Gary Lai
    After I did brew update and brew upgrade, my postgres got some problems. I tried to uninstall postgres and install it again, but that didn't work either. This is the error message (I also get this error message when I try to do rake db:migrate):

        $ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    How can I solve it? Mac version: Mountain Lion. Homebrew version: 0.9.3. Postgres version: psql (PostgreSQL) 9.2.1. And this is what I did:

        12:30 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.2.1...
        12:31 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.1.4...
        12:31 ~/D/works$ psql --version
        bash: /usr/local/bin/psql: No such file or directory
        12:33 ~/D/works$ brew install postgresql
        ==> Downloading http://ftp.postgresql.org/pub/source/v9.2.1/postgresql-9.2.1.tar.bz2
        Already downloaded: /Library/Caches/Homebrew/postgresql-9.2.1.tar.bz2
        ......
        ......
        ==> Summary
        /usr/local/Cellar/postgresql/9.2.1: 2814 files, 38M, built in 2.7 minutes
        12:37 ~/D/works$ initdb /usr/local/var/postgres -E utf8
        The files belonging to this database system will be owned by user "laigary".
        This user must also own the server process.
        The database cluster will be initialized with locale "en_US.UTF-8".
        The default text search configuration will be set to "english".
        initdb: directory "/usr/local/var/postgres" exists but is not empty
        If you want to create a new database system, either remove or empty
        the directory "/usr/local/var/postgres" or run initdb
        with an argument other than "/usr/local/var/postgres".
        12:39 ~/D/works$ mkdir -p ~/Library/LaunchAgents
        12:39 ~/D/works$ cp /usr/local/Cellar/postgresql/9.2.1/homebrew.mxcl.postgresql.plist ~/Library/LaunchAgents/
        12:39 ~/D/works$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
        homebrew.mxcl.postgresql: Already loaded
        12:39 ~/D/works$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
        server starting
        12:39 ~/D/works$ env ARCHFLAGS="-arch x86_64" gem install pg
        Building native extensions. This could take a while...
        Successfully installed pg-0.14.1
        1 gem installed
        12:42 ~/D/works$ psql --version
        psql (PostgreSQL) 9.2.1
        12:42 ~/D/works$ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
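    Two quick checks usually narrow this down (a sketch, not a confirmed fix; the data directory and server.log path are the ones from the transcript above):

        # did the server actually stay up, and what did it log while starting?
        pg_ctl -D /usr/local/var/postgres status
        tail -n 50 /usr/local/var/postgres/server.log
        # does the socket psql is looking for exist?
        ls -l /tmp/.s.PGSQL.5432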

    Read the article

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO. We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two-server setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multiple restarts a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them, and while some are more stable than others, we never quite got it right. From the default:

        java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/

    To currently:

        java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log

    We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor; on average, the amount of memory consumed hovers in the mid-300MB range and can go up to the low 700MB range under heavy load. Most of the crashes will be logged in jrun4/bin/hs_err_pid*.log, always with an "Out of swap space" error. I've attached links to the hs_err and garbage collector log files from yesterday at the bottom of the post. The relevant part is (I think) this:

        Heap
         PSYoungGen      total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000)
          eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000)
          from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000)
          to   space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000)
         PSOldGen        total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000)
          object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000)
         PSPermGen       total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000)
          object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000)

    From it, I gather that it's the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize -- but the total still shows as 107520K!??! No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!!

    The log files (sorry, I know sendspace isn't the friendliest of places -- if you have another host suggestion for log files, let me know and I'll update the post; SO doesn't like them inline, it blows up the format of the post):

        The hs_err log file: http://www.sendspace.com/file/fgak8l
        The gc log: http://www.sendspace.com/file/w0r2ct

    Read the article

  • Different function returns from command line and within function

    - by Myx
    Hello: I have an extremely bizarre situation: I have a function in MATLAB which calls three other main functions and produces two figures for me. The function reads in an input jpeg image, crops it, segments it using kmeans clustering, and outputs 2 figures to the screen - the original image and the clustered image with the cluster centers indicated. Here is the function in MATLAB:

        function [textured_avg_x photo_avg_x] = process_database_images()
            clear all
            warning off %#ok

            type_num_max = 3; % type is 1='texture', 2='graph', or 3='photo'
            type_num_max = 1;

            img_max_num_photo = 100; % 400 photo images
            img_max_num_other = 100; % 100 textured, and graph images

            for type_num = 1:2:type_num_max
                if(type_num == 3)
                    img_num_max = img_max_num_photo;
                else
                    img_num_max = img_max_num_other;
                end
                img_num_max = 1;

                for img_num = 1:img_num_max
                    [type img] = load_image(type_num, img_num);
                    %img = imread('..\images\445.jpg');
                    img = crop_image(img);
                    [IDX k block_bounds features] = segment_image(img);
                end
            end
        end

    The function segment_image first shows me the color image that was passed in, performs kmeans clustering, and outputs the clustered image. When I run this function on a particular image, I get 3 clusters (which is not what I expect to get). When I run the following commands from the MATLAB command prompt:

        >> img = imread('..\images\texture\1.jpg');
        >> img = crop_image(img);
        >> segment_image(img);

    then the first image that is displayed by segment_image is the same as when I run the function (so I know that the clustering is done on the same image), but the number of clusters is 16 (which is what I expect). In fact, when I run my process_database_images() function on my entire image database, EVERY image is evaluated to have 3 clusters (this is a problem), whereas when I test some images individually, I get in the range of 12-16 clusters, which is what I prefer and expect. Why is there such a discrepancy? Am I having some syntax bug in my process_database_images() function? If more code is required from me (i.e. segment_images function, or crop_image function), please let me know. Thanks.

    Read the article

  • Issue accessing remote Infinispan MBeans

    - by user1960172
    I am able to access the MBeans via a local JConsole, but I am not able to access them from a remote host. My configuration:

        <?xml version='1.0' encoding='UTF-8'?>
        <server xmlns="urn:jboss:domain:1.4">
          <extensions>
            <extension module="org.infinispan.server.endpoint"/>
            <extension module="org.jboss.as.clustering.infinispan"/>
            <extension module="org.jboss.as.clustering.jgroups"/>
            <extension module="org.jboss.as.connector"/>
            <extension module="org.jboss.as.jdr"/>
            <extension module="org.jboss.as.jmx"/>
            <extension module="org.jboss.as.logging"/>
            <extension module="org.jboss.as.modcluster"/>
            <extension module="org.jboss.as.naming"/>
            <extension module="org.jboss.as.remoting"/>
            <extension module="org.jboss.as.security"/>
            <extension module="org.jboss.as.threads"/>
            <extension module="org.jboss.as.transactions"/>
            <extension module="org.jboss.as.web"/>
          </extensions>
          <management>
            <security-realms>
              <security-realm name="ManagementRealm">
                <authentication>
                  <local default-user="$local"/>
                  <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
              </security-realm>
              <security-realm name="ApplicationRealm">
                <authentication>
                  <local default-user="$local" allowed-users="*"/>
                  <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
              </security-realm>
            </security-realms>
            <management-interfaces>
              <native-interface security-realm="ManagementRealm">
                <socket-binding native="management-native"/>
              </native-interface>
              <http-interface security-realm="ManagementRealm">
                <socket-binding http="management-http"/>
              </http-interface>
            </management-interfaces>
          </management>
          <profile>
            <subsystem xmlns="urn:jboss:domain:logging:1.2">
              <console-handler name="CONSOLE">
                <level name="INFO"/>
                <formatter>
                  <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
                </formatter>
              </console-handler>
              <periodic-rotating-file-handler name="FILE" autoflush="true">
                <formatter>
                  <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
                </formatter>
                <file relative-to="jboss.server.log.dir" path="server.log"/>
                <suffix value=".yyyy-MM-dd"/>
                <append value="true"/>
              </periodic-rotating-file-handler>
              <logger category="com.arjuna">
                <level name="WARN"/>
              </logger>
              <logger category="org.apache.tomcat.util.modeler">
                <level name="WARN"/>
              </logger>
              <logger category="org.jboss.as.config">
                <level name="DEBUG"/>
              </logger>
              <logger category="sun.rmi">
                <level name="WARN"/>
              </logger>
              <logger category="jacorb">
                <level name="WARN"/>
              </logger>
              <logger category="jacorb.config">
                <level name="ERROR"/>
              </logger>
              <root-logger>
                <level name="INFO"/>
                <handlers>
                  <handler name="CONSOLE"/>
                  <handler name="FILE"/>
                </handlers>
              </root-logger>
            </subsystem>
            <subsystem xmlns="urn:infinispan:server:endpoint:6.0">
              <hotrod-connector socket-binding="hotrod" cache-container="clustered">
                <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
              </hotrod-connector>
              <memcached-connector socket-binding="memcached" cache-container="clustered"/>
              <!--<rest-connector virtual-server="default-host" cache-container="clustered" security-domain="other" auth-method="BASIC"/> -->
              <rest-connector virtual-server="default-host" cache-container="clustered" />
              <websocket-connector socket-binding="websocket" cache-container="clustered"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:datasources:1.1">
              <datasources/>
            </subsystem>
            <subsystem xmlns="urn:infinispan:server:core:5.3" default-cache-container="clustered">
              <cache-container name="clustered" default-cache="default">
                <transport executor="infinispan-transport" lock-timeout="60000"/>
                <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
                  <locking isolation="READ_COMMITTED" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                  <transaction mode="NONE"/>
                </distributed-cache>
                <distributed-cache name="memcachedCache" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
                  <locking isolation="READ_COMMITTED" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                  <transaction mode="NONE"/>
                </distributed-cache>
                <distributed-cache name="namedCache" mode="SYNC" start="EAGER"/>
              </cache-container>
              <cache-container name="security"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jca:1.1">
              <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/>
              <bean-validation enabled="true"/>
              <default-workmanager>
                <short-running-threads>
                  <core-threads count="50"/>
                  <queue-length count="50"/>
                  <max-threads count="50"/>
                  <keepalive-time time="10" unit="seconds"/>
                </short-running-threads>
                <long-running-threads>
                  <core-threads count="50"/>
                  <queue-length count="50"/>
                  <max-threads count="50"/>
                  <keepalive-time time="10" unit="seconds"/>
                </long-running-threads>
              </default-workmanager>
              <cached-connection-manager/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jdr:1.0"/>
            <subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="${jboss.default.jgroups.stack:udp}">
              <stack name="udp">
                <transport type="UDP" socket-binding="jgroups-udp"/>
                <protocol type="PING"/>
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                <protocol type="FD_ALL"/>
                <protocol type="pbcast.NAKACK"/>
                <protocol type="UNICAST2"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
              </stack>
              <stack name="tcp">
                <transport type="TCP" socket-binding="jgroups-tcp"/>
                <!--<protocol type="MPING" socket-binding="jgroups-mping"/>-->
                <protocol type="TCPPING">
                  <property name="initial_hosts">10.32.50.53[7600],10.32.50.64[7600]</property>
                </protocol>
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                <protocol type="FD"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="pbcast.NAKACK">
                  <property name="use_mcast_xmit">false</property>
                </protocol>
                <protocol type="UNICAST2"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
              </stack>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:jmx:1.1">
              <show-model value="true"/>
              <remoting-connector use-management-endpoint="false"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:modcluster:1.1">
              <mod-cluster-config advertise-socket="modcluster" connector="ajp" excluded-contexts="console">
                <dynamic-load-provider>
                  <load-metric type="busyness"/>
                </dynamic-load-provider>
              </mod-cluster-config>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:naming:1.2"/>
            <subsystem xmlns="urn:jboss:domain:remoting:1.1">
              <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:security:1.2">
              <security-domains>
                <security-domain name="other" cache-type="infinispan">
                  <authentication>
                    <login-module code="Remoting" flag="optional">
                      <module-option name="password-stacking" value="useFirstPass"/>
                    </login-module>
                    <login-module code="RealmUsersRoles" flag="required">
                      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
                      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
                      <module-option name="realm" value="ApplicationRealm"/>
                      <module-option name="password-stacking" value="useFirstPass"/>
                    </login-module>
                  </authentication>
                </security-domain>
                <security-domain name="jboss-web-policy" cache-type="infinispan">
                  <authorization>
                    <policy-module code="Delegating" flag="required"/>
                  </authorization>
                </security-domain>
              </security-domains>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:threads:1.1">
              <thread-factory name="infinispan-factory" group-name="infinispan" priority="5"/>
              <unbounded-queue-thread-pool name="infinispan-transport">
                <max-threads count="25"/>
                <keepalive-time time="0" unit="milliseconds"/>
                <thread-factory name="infinispan-factory"/>
              </unbounded-queue-thread-pool>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:transactions:1.2">
              <core-environment>
                <process-id>
                  <uuid/>
                </process-id>
              </core-environment>
              <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
              <coordinator-environment default-timeout="300"/>
            </subsystem>
            <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
              <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
              <connector name="ajp" protocol="AJP/1.3" scheme="http" socket-binding="ajp"/>
              <virtual-server name="default-host" enable-welcome-root="false">
                <alias name="localhost"/>
                <alias name="example.com"/>
              </virtual-server>
            </subsystem>
          </profile>
          <interfaces>
            <interface name="management">
              <inet-address value="${jboss.bind.address.management:10.32.222.111}"/>
            </interface>
            <interface name="public">
              <inet-address value="${jboss.bind.address:10.32.222.111}"/>
            </interface>
          </interfaces>
          <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
            <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
            <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
            <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
            <socket-binding name="ajp" port="8089"/>
            <socket-binding name="hotrod" port="11222"/>
            <socket-binding name="http" port="8080"/>
            <socket-binding name="https" port="8443"/>
            <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
            <socket-binding name="jgroups-tcp" port="7600"/>
            <socket-binding name="jgroups-tcp-fd" port="57600"/>
            <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
            <socket-binding name="jgroups-udp-fd" port="54200"/>
            <socket-binding name="memcached" port="11211"/>
            <socket-binding name="modcluster" port="0" multicast-address="224.0.1.115" multicast-port="23364"/>
            <socket-binding name="remoting" port="4447"/>
            <socket-binding name="txn-recovery-environment" port="4712"/>
            <socket-binding name="txn-status-manager" port="4713"/>
            <socket-binding name="websocket" port="8181"/>
          </socket-binding-group>
        </server>

    Remote process: service:jmx:remoting-jmx://10.32.222.111:4447

    I added a user to both the management and application realms:

        admin=2a0923285184943425d1f53ddd58ec7a
        test=2b1be81e1da41d4ea647bd82fc8c2bc9

    But when I try to connect, it says: Connection failed: Retry. When I use the remote process as 10.32.222.111:4447, on the server it prompts a warning:

        16:29:48,084 ERROR [org.jboss.remoting.remote.connection] (Remoting "djd7w4r1" read-1) JBREM000200: Remote connection failed: java.io.IOException: Received an invalid message length of -2140864253

    I also disabled remote authentication:

        -Dcom.sun.management.jmxremote
        -Dcom.sun.management.jmxremote.ssl=false
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.port=12345

    Still not able to connect. Any help will be highly appreciated. Thanks

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprises 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows:

    Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg

    I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up).

    Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg
    Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg

    In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment, but I'm having some issues, as I cannot get it to work properly even though it partially works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running):

    http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg

    According to this screen, the process seems to be running, as it shows in the instance monitor. However, I don't see any output coming out, neither at the AdminServer console nor at the mn1 console. In the single-server domain, output was visible from the JPD process "timeout" callback method, whose implementation is shown below:

        @com.bea.wli.control.broker.MessageBroker.StaticSubscription(
            xquery = "",
            filterValueMatch = "",
            channelName = "/SamplePrefix/Samples/SampleStringChannel",
            messageBody = "{x0}")
        public void subscription(java.lang.String x0) {
            String toReturn = "";
            try {
                Context myCtx = new InitialContext();
                MBeanHome mbeanHome = (MBeanHome) myCtx.lookup("weblogic.management.home.localhome");
                toReturn = mbeanHome.getMBeanServer().getServerName();
                System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn);
            } catch (Exception e) {
                System.out.println("Exception!");
                e.printStackTrace();
            }
        }

        (...)

        @org.apache.beehive.controls.api.events.EventHandler(
            field = "myT",
            eventSet = com.bea.control.WliTimerControl.Callback.class,
            eventName = "onTimeout")
        public void myT_onTimeout(long time, java.io.Serializable data) {
            // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #//
            // input transform
            System.out.println("**** published at **** " + System.currentTimeMillis());
            publishControl.publish("aaaa");
            // parameter assignment
            // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #//
        }

    and here is the output visible at the "AdminServer" console in single-server domain testing:

        **** published at **** 1273238090713
        **** executed at **** 1273238132123 by: AdminServer
        **** published at **** 1273238152462
        **** executed at **** 1273238152562 by: AdminServer
        (...)

    What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also, it will be easier to implement ACID transactions and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases possible almost for free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it.

    Here's my line of thought:
    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous -- and currently valid -- structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this that are designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes. I think that exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be a good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). Also, a delta compression system could be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they offer the least number of rotations.

    But there are some possible pitfalls along the way:
    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications, most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
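    For readers tripped up by the terminology the closing edit points at, a minimal sketch of the structural sharing this plan relies on (plain BST insertion, not red-black balancing): each insert copies only the root-to-leaf path, so the old and new versions share every other node, which is what makes snapshots -- and hence the temporal aspect -- nearly free.

        -- a sketch: path-copying insertion into an immutable binary search tree
        data Tree a = Leaf | Node (Tree a) a (Tree a)

        insert :: Ord a => a -> Tree a -> Tree a
        insert x Leaf = Node Leaf x Leaf
        insert x t@(Node l v r)
          | x < v     = Node (insert x l) v r  -- path node copied; r shared with the old version
          | x > v     = Node l v (insert x r)  -- path node copied; l shared
          | otherwise = t                      -- already present; the old version is returned intact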

    Read the article

  • I'm searching for a messaging platform (like XMPP) that allows tight integration with a web application

    - by loxs
    Hi. At the company I work for, we are building a cluster of web applications for collaboration: things like accounting, billing, CRM, etc. We are using a RESTful technique: for the database we use CouchDB, and different applications communicate with one another and with the database via http. Besides, we have a single sign-on solution, so that when you log in to one application, you are automatically logged in to the others. For all apps we use Python (Pylons).

    Now we need to add instant messaging to the stack. We need to support both web and desktop clients. But just being able to chat is not enough. We need to be able to achieve all of the following (and more similar things):

    - When somebody gets assigned to a task, they must receive a message. I guess this is possible with some system daemon.
    - There must be an option to automatically group people by lots of different properties. For example, there must be groups divided both by geographical location, by company division, and by job type (all the programmers from different cities and different company divisions must form a group), so that one can send mass messages to a group of choice.
    - Rooms should be automatically created and destroyed. For example, when several people visit the same invoice, a room for them must be automatically created (and they must auto-join). And when all leave the invoice, the room must be destroyed.
    - Authentication and authorization from our applications.

    I can implement this using custom solutions like hookbox (http://hookbox.org/docs/intro.html), but then I'll have lots of problems supporting desktop clients. I have no former experience with instant messaging; I've been reading about it lately, and I've been looking mostly at things like ejabberd. But it has been a hard time, and I can't find out whether what I want is possible at all. So I'd be happy if people with experience in this field could help me with some advice, articles, tales of what is possible, etc.

    Read the article

  • Why does Rails with Passenger/nginx only work in development mode? No logs available

    - by Michael W.
    Hey folks, I have a serious problem with one of our webservers... after an internal alpha test with a mongrel/haproxy cluster that worked well, we wanted to use nginx with Passenger for our first production server (customers will access this server). However, I can only run the Rails app in development mode with Passenger/nginx. The app itself runs perfectly with mongrel or webrick in production mode. My biggest problem with this case is that I don't find ANY information in the nginx or Rails logs (only when I use mongrel or webrick). Permissions are correct. passenger-status shows that the app is running, but I always get the static 500.html error page... It would be so nice if you guys could give me a hint and help me solve the problem. I put the config at the bottom of the post... This exact config works with rails_env development; but I'd like to use production mode ;-)

    Thank you very much for your help!

    Version: Ubuntu 8.04.2 64bit / nginx-0.7.64 (compiled and installed via passenger-2.2.11)

        cat /opt/nginx/conf/nginx.conf

        user www-data;
        worker_processes 4;

        error_log logs/error.log;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.11;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_log_level 3;

            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';

            access_log logs/access.log;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            server {
                listen 80;
                server_name <<servername>>;
                root /srv/app01/public;
                passenger_enabled on;
            }
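    One detail worth comparing against the config above (a sketch, not a confirmed fix): Passenger's nginx module has a rails_env directive, which defaults to production; since this exact config works once the environment is development, making the environment explicit in the server block is the first thing to check:

        # a sketch: the same server block with the environment made explicit
        server {
            listen 80;
            server_name <<servername>>;
            root /srv/app01/public;
            passenger_enabled on;
            rails_env production;   # the Passenger default; the app reportedly only boots when this is 'development'
        }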

    Read the article

  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is to use environment variables, but in my environment this is not going to work because my Rails application serves many different clients. Much code is shared between clients, but there are also many differences, which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client-specific modules to the module load path ($:). In order to start the application for a particular client, I could now use an environment variable like, say, TARGET=AMAZONE. Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows, and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned, as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that when run it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here are the things I looked at:
    1. Pass an extra parameter: mongrel_rails start --target XYZ ...
    2. Use a config file, add target:XYZ, and then do: mongrel_rails start -C x:\myapp\myconfig.yml
    3. Modify the file Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb
    4. Perhaps use the --script option, but all docs that I found on it were for Unix.
    Options 1 and 2 simply don't work, and I played with 4 but never managed to get it to do anything, so I had no choice but to go with 3. While it is relatively simple, I hate changing Ruby library code. Particularly disappointing is that 2 doesn't work; I mean, what is so unreasonable about adding other [custom] options in the config file? Actually, I think this is a fundamental piece that is missing in Rails: somehow, the application should be able to register and access command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!!!
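    One approach that avoids both environment variables and mongrel's option parsing is to keep the target next to the application: if each cluster (and each Windows service) runs from its own deployment directory, a tiny per-deployment YAML file read at boot can populate the load path. A minimal sketch follows; config/client.yml, its target key, and the clients/ directory layout are all illustrative inventions, not a mongrel or Rails convention:

        # config/environment.rb (Rails 2.x era), above Rails::Initializer.run
        require 'yaml'
        client_file = File.join(File.dirname(__FILE__), 'client.yml')
        if File.exist?(client_file)
          TARGET = YAML.load_file(client_file)['target']   # e.g. "AMAZONE"
          # Push this client's overrides onto the module load path.
          $:.unshift File.expand_path(
            File.join(File.dirname(__FILE__), '..', 'clients', TARGET.downcase, 'lib'))
        else
          TARGET = 'DEFAULT'   # shared code only, no client-specific modules
        end

    This assumes each cluster runs from its own directory; with a single shared checkout you would need to supply one such file per service some other way.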

    Read the article

  • Paperclip not running tasks but not showing errors

    - by Trip
    This is strange. I just did a deploy to a cluster server, and since then, pictures have not been processing. Reading the logs, I usually do not get an error at all; the jobs just never finish. However, on one particular image I found this little bit at least, though it might not explain everything. Any ideas?

        Processing PhotosController#edit (for 69.248.152.173 at 2010-05-27 04:25:12) [GET]
        Parameters: {"gallery_id"=>"2102", "action"=>"edit", "type"=>"photo", "id"=>"15453", "crop"=>"true", "controller"=>"photos", "organization_id"=>"470", "_"=>"1274959512393"}
        Rendering media/crop_photo
        ActionView::TemplateError (/data/HQ_Channel/releases/20100524111501/public/system/photos/15453/original/DSC05193.JPG is not recognized by the 'identify' command.) on line #4 of app/views/media/crop_photo.js.haml:
        1: == $("#media_header").html('#{ escape_javascript(render :partial => 'media/crop_photo') }').slideDown("slow");
        2:
        3: :plain
        4: function updateForm(coords)
        5: {
        6: var rx = #{PHOTO_IMAGE_WIDTH} / coords.w;
        7: var ry = #{PHOTO_IMAGE_HEIGHT} / coords.h;
        vendor/gems/thoughtbot-paperclip-2.3.1/lib/paperclip/geometry.rb:24:in `from_file'
        app/models/photo.rb:68:in `photo_geometry'
        app/views/media/crop_photo.js.haml:4:in `_run_haml_app47views47media47crop_photo46js46haml'
        haml (2.2.2) [v] lib/haml/helpers/action_view_mods.rb:13:in `render'
        app/controllers/photos_controller.rb:81:in `crop'
        app/controllers/photos_controller.rb:24:in `edit'
        haml (2.2.2) [v] rails/./lib/sass/plugin/rails.rb:19:in `process'
        lib/flash_session_cookie_middleware.rb:14:in `call'
        vendor/gems/hoptoad_notifier-2.2.2/lib/hoptoad_notifier/rack.rb:27:in `call'
        ** [Hoptoad] Failure: Net::HTTPClientError
        ** [Hoptoad] Environment Info: [Ruby: 1.8.6] [Rails: 2.3.3] [Env: production]
        ** [Hoptoad] Response from Hoptoad: No project exists with the given API key.
        Rendering /data/HQ_Channel/releases/20100524111501/public/500.html (500 Internal Server Error)

    And then a little later, I got the same error again, verbatim.
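    That TemplateError means Paperclip shelled out to ImageMagick's identify and the call failed, which on a freshly built cluster node is usually ImageMagick missing entirely, or living on a PATH the app server doesn't see. Worth checking by hand on the new node; the image path is taken from the log above, the rest is illustrative:

        which identify     # is ImageMagick installed and on the PATH at all?
        identify /data/HQ_Channel/releases/20100524111501/public/system/photos/15453/original/DSC05193.JPG
        # If identify exists but sits somewhere like /usr/local/bin that the app
        # server's PATH misses, Paperclip 2.3 can be pointed at it from an
        # initializer:  Paperclip.options[:command_path] = "/usr/local/bin"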

    Read the article

  • Ubuntu server random hangups

    - by Ebbe
    Hello all. This is my first post to this forum, which I found through the superb StackOverflow podcast on IT Conversations. I am quite new in my role as server administrator for an exhibition center in London. Basically we have a central file and SQL server to which roughly 40 stations connect to upload/download data used/captured by a set of applications. Over the last weeks we have experienced a few random hangups in our applications, and as it always happens to multiple applications simultaneously, I do not believe the applications are the source of the problem. We also monitor the network using Dartware InterMapper, which indicates that all switches and stations on the network were reachable during the downtime. Thus, it's all pointing to the server. I have been looking through all the log files I can think of, and the only thing so far that I have found suspicious is the following lines in the syslog, which are from the time of one of the hangups:

        Feb  6 17:14:27 es named[5582]: client 127.0.0.1#33721: RFC 1918 response from Internet for 150.0.168.192.in-addr.arpa
        Feb  6 17:14:40 es named[5582]: client 127.0.0.1#32899: RFC 1918 response from Internet for 152.0.168.192.in-addr.arpa
        Feb  6 17:15:01 es /USR/SBIN/CRON[1956]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
        Feb  6 17:16:06 es /USR/SBIN/CRON[2031]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
        Feb  6 17:21:00 es named[5582]: *** POKED TIMER ***
        Feb  6 17:21:00 es last message repeated 2 times
        Feb  6 17:21:07 es named[5582]: client 127.0.0.1#44194: RFC 1918 response from Internet for 143.0.168.192.in-addr.arpa
        Feb  6 17:21:12 es named[5582]: client 127.0.0.1#59004: RFC 1918 response from Internet for 164.0.168.192.in-addr.arpa

    I find a few interesting lines here: 1) "RFC 1918 response from Internet for 150.0.168.192.in-addr.arpa". I see this a lot in the syslog, and basically every time I do an nslookup for any of the computers in the cluster, a similar line appears. I understand from Google that this has to do with reverse-lookup problems, but I do not know how it could affect the system. Say one of these lines appears every time a user station connects to the server, which may happen several times a second: could that possibly cause a hangup of the entire server? 2) "POKED TIMER": I have googled this quite a lot but have not found an explanation I can relate to. What does it mean? 3) The timestamps: it seems the entire server stopped responding for several minutes. Normally there are many printouts to the syslog per minute on this server, and furthermore the cron job is set to run once every minute, which, according to the log, hasn't happened here. OS: Ubuntu 8.04. Kernel: Linux 2.6.24-24-server x86_64 GNU/Linux. Hardware: Dell R710, RAID1, CPU: 2x Xeon E5530, 16 GB memory. Average load is very low, and memory should not be a problem. Please let me know if you need any additional information. Best wishes, Ebbe
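    On the RFC 1918 lines specifically: they mean the local BIND is forwarding reverse lookups for 192.168.x.x addresses out to the Internet, where (per RFC 1918) no useful answer can ever come back. That is a symptom of missing local reverse zones rather than a likely cause of a multi-minute hang, but it is cheap to silence by making named authoritative for the private range. A minimal sketch for named.conf.local; the zone-file path is illustrative:

        // Answer 192.168.0.x reverse lookups locally instead of leaking them upstream.
        zone "168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/db.192.168";   // PTR records for the stations, or an empty zone
        };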

    Read the article

  • High load MySQL on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at the maximum, with only about 500 MB free, and the biggest consumer by far is MySQL. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables; that's why InnoDB. Here is my MySQL config:

        [mysqld]
        # * Basic Settings
        default-time-zone = "+04:00"
        user     = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        basedir  = /usr
        datadir  = /var/lib/mysql
        tmpdir   = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone = 'Europe/Moscow'
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        # * Fine Tuning
        #low_priority_updates = 1
        concurrent_insert   = ALWAYS
        wait_timeout        = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache      = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections    = 200
        thread_concurrency = 8
        # * Query Cache Configuration
        #more ignored
        query_cache_limit = 50M
        query_cache_size  = 210M
        #on query cache
        query_cache_type = 1
        # * Logging and Replication
        # Both locations get rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        # Error logging goes to syslog. This is a Debian improvement :)
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 1
        log-queries-not-using-indexes
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin   = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin   = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index       = /var/lib/mysql/mysql-bin.index
        log-error           = /var/lib/mysql/mysql-bin.err
        relay-log           = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index     = /var/lib/mysql/relay-bin.index
        binlog_do_db     = 24avia
        expire_logs_days = 10
        max_binlog_size  = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size        = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency      = 8
        table_definition_cache = 2000
        group_concat_max_len   = 16M
        #binlog_do_db     = gate
        #binlog_ignore_db = include_database_name
        # * BerkeleyDB
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        # * InnoDB
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        # * Security Features
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        # ssl-ca   = /etc/mysql/cacert.pem
        # ssl-cert = /etc/mysql/server-cert.pem
        # ssl-key  = /etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer      = 32M
        key_buffer_size = 512M

        # * NDB Cluster
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        # The following configuration is read by the NDB Data Nodes (ndbd processes),
        # not by the NDB Management Nodes (ndb_mgmd processes).
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory at all, but MySQL still stops every day. As you can see, about 24 GB of the "used" memory is actually page cache (thanks to Michael Hampton for the correction). Load average on the server is 3.5. Maybe it's the HDD, or some other problem? Maybe my config is not optimal for 30 GB of InnoDB?
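    One pattern in this config commonly produces exactly this failure mode: the per-session and per-operation buffers multiply by max_connections (200 here), and an implicit in-memory temp table may claim up to tmp_table_size on its own, so a burst of GROUP BY queries at 3000M each can exhaust 32 GB quickly. A sketch of more conservative values; the numbers are illustrative starting points, not a tested drop-in:

        # my.cnf fragment - per-session buffers are allocated per connection
        tmp_table_size          = 256M   # was 3000M; a few concurrent implicit temp tables at 3000M exceeds RAM
        max_heap_table_size     = 256M   # was 3000M; the effective limit is the smaller of the two
        sort_buffer_size        = 4M     # was 64M; allocated per sort
        myisam_sort_buffer_size = 64M    # was 400M; only used for MyISAM repairs and index builds
        join_buffer_size        = 1M     # was 5M; allocated per index-less join
        # With ~30 GB of InnoDB data, the freed memory is better spent here
        # (the 2 GB key_buffer_size only helps MyISAM indexes):
        innodb_buffer_pool_size = 16G    # was 5000M; leave headroom for Apache, memcached and nginx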

    Read the article

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I have been having a sporadic issue, mainly during our busiest time of day between 10:30 and 11:45 AM, where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below:

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
        at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
        at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
        at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
        at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm now using the ASP.NET State service on a centralized back-end Windows 2003 server that all the web servers communicate with. I was originally using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me, but I had no choice but to try it! I have attempted creating multiple application pools, adding additional servers, and changing the TCP/IP timeout from 20 to 30 seconds, and I have even called Microsoft ASP.NET product support, with very little success. I also recommended that they review whether they are using read-only session state instead of read/write per page request, as I understand the latter basically causes every page to make round-trips to the state server even if the page doesn't use state. Unfortunately, the application is developed by our product company, and they insist that it is something in my environment, because other clients do not have this sort of issue. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. It almost seems like I've simply reached some architectural limit within the application... Microsoft's position is that the session state usage needs to be reduced, and that the return code reported back from the state server indicates its buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to e-mail me when unhandled exceptions occur. I get 500-1000 e-mails during the period of high activity!
    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
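    For reference, the knobs under discussion live in web.config; below is a sketch with an illustrative state-server name and timeout, plus the per-page directive that removes the state-server write round-trip for pages that only read session:

        <!-- web.config: out-of-process session state on a central state server -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=stateserver01:42424"
                        stateNetworkTimeout="30" />
        </system.web>
        <!-- and in each .aspx page that never writes session state: -->
        <%@ Page EnableSessionState="ReadOnly" %>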

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder. That folder could be seen as a kind of Inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly; I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which is not efficient enough for productive use. I'd rather do a few clicks than have to search through the files, whether with some software product or by looking through the folder. Often the file names themselves are not descriptive, so it would be easier to recognize files if there were few in a folder rather than thousands. "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows: "The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information." [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004). [2] S. Lawrence, C. L. Giles, Nature 400, 107-109 (1999). [3] R. F. I. Cancho and R. V. Sol, Proc. R. Soc. London, Ser. B 268, 2261 (2001). [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002). The paper goes on to describe, in general terms, how such data is usually organized, but judging by the abstract and conclusion it does not arrive at an approach that yields a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific one. Upon searching further, I don't seem to find any useful or free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noticed that there are different ways to name this problem, which lead to different sets of papers; perhaps the paper I want is out there, but I'm not using the same (often more scientific) terms it uses. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details of how the advocate used the laptop or organized his data. In any case, it was far more effective than how most of us organize our data these days... When advising me how I should organize my data, note that I'm not looking for mere suggestions: I would love to see statistics or scientific measurement approaches that confirm the advice would help me reach my goal.

    Read the article

  • How to configure multiple iSCSI Portal Groups on an EqualLogic PS6100?

    - by kce
    I am working on a migration from a VMware vSphere environment to a Hyper-V cluster running Windows Server 2012 R2. The setup is pretty small: an EqualLogic PS6100e, two Dell PowerConnect 5424 switches, and a handful of R710s and R620s. The SAN was configured on a non-RFC1918 network that is not assigned to our organization, and since I am building a new virtualization environment anyway, I figured this would be an appropriate time to do a subnet migration. I configured a separate VLAN and subnet on the switches and on the two previously unused NICs on the PS6100's controllers. At this time I only have a single Hyper-V host cabled in, but I can successfully ping the PS6100 from the host, and from the PS6100 I can ping each of the four NICs that are currently on the storage network. However, I cannot connect the Microsoft iSCSI Initiator to the target. I have successfully added the target portals (the IP addresses of the PS6100 NICs) and the targets are discovered, but they are listed as inactive. If I try to connect to them I get the error "Log onto Target - Connection Failed", and iScsiPrt events 1 and 70 are recorded in the Event Log. I have verified that access control to the volume is not the problem by temporarily disabling it. I suspect the problem is the portal group IP address, which is still the group address on the old subnet (I know, I know, I might be committing the sin of the X/Y problem, but everything else looks good). RFC 3720 has this to say about Network Portals and Portal Groups:

    "Network Portal: The Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI Node within that Network Entity for the connection(s) within one of its iSCSI sessions. A Network Portal in an initiator is identified by its IP address. A Network Portal in a target is identified by its IP address and its listening TCP port."

    "Portal Groups: iSCSI supports multiple connections within the same session; some implementations will have the ability to combine connections in a session across multiple Network Portals. A Portal Group defines a set of Network Portals within an iSCSI Network Entity that collectively supports the capability of coordinating a session with connections spanning these portals. Not all Network Portals within a Portal Group need participate in every session connected through that Portal Group. One or more Portal Groups may provide access to an iSCSI Node. Each Network Portal, as utilized by a given iSCSI Node, belongs to exactly one portal group within that node."

    The EqualLogic Group Manager documentation has this to say about the group IP address:

    "You use the group IP address as the iSCSI discovery address when connecting initiators to iSCSI targets in the group. If you modify the group IP address, you might need to change your initiator configuration to use the new discovery address. Changing the group IP address disconnects any iSCSI connections to the group and any administrators logged in to the group through the group IP address."

    Which sounds equivalent to me (I am following up with support to confirm). I think a reasonable explanation at this point is that the initiator can't complete the connection to the target because the group IP address / network portal is on a different subnet. I really want to avoid a cutover and would prefer to run both subnets side by side until I can install and configure each Hyper-V host. Questions: Is my assessment at all reasonable?
    Is it possible to configure multiple group IP addresses on the EqualLogic PS6100? I don't want to just change it, as that will disconnect the remaining ESXi hosts. Am I just Doing It Wrong(TM)?
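    For what it's worth, EqualLogic initiators normally log in through the group IP, which then redirects each session to a member NIC, so a group IP that exists only on the old subnet is consistent with discovery succeeding against the member portals while every login fails. A hedged way to test this from the 2012 R2 host using the built-in iSCSI cmdlets; all addresses below are placeholders for the new storage subnet:

        # Can the host reach the group IP on the iSCSI port at all?
        Test-NetConnection -ComputerName 10.20.30.10 -Port 3260
        # Discover via the group IP rather than an individual member NIC...
        New-IscsiTargetPortal -TargetPortalAddress 10.20.30.10
        # ...then attempt the login; a redirect to an old-subnet portal fails here.
        Get-IscsiTarget | Connect-IscsiTarget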

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72  | Next Page >