Search Results

Search found 22716 results on 909 pages for 'network architecture'.

Page 501/909

  • Subscribers locking during snapshot replication

    - by remi_bourgarel
    Hi all :) Here is my architecture: I have a main server and 4 replicas (the servers are synchronized with snapshot replication). The replication is working fine, except for one thing: during the replication, a lot of SELECT requests on one of the replicas fail with a time-out. Here are my questions: Can I avoid these time-outs? If I can't, how can I detect the start and the end of the replication so I can redirect all the requests for that replica to the main server? Thanks. Sorry if you have already answered this kind of question, but I couldn't find anything.
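
    A minimal sketch of the redirect idea, assuming the application can probe a replica before routing reads to it: run a cheap SELECT with a short login timeout, and fall back to the main server while the snapshot is being applied. The server names, connection string and probe query below are placeholders, not part of the original setup.

      import pyodbc

      REPLICAS = ["replica1", "replica2", "replica3", "replica4"]  # placeholder host names
      MAIN = "mainserver"                                          # placeholder host name

      def reachable(server, timeout_s=2):
          """True if a trivial SELECT completes quickly on the given server."""
          try:
              cn = pyodbc.connect(
                  "DRIVER={SQL Server};SERVER=%s;DATABASE=mydb;Trusted_Connection=yes;" % server,
                  timeout=timeout_s)
              cn.cursor().execute("SELECT 1").fetchone()
              cn.close()
              return True
          except pyodbc.Error:
              return False  # replica is busy applying the snapshot (or down)

      def pick_read_server():
          for replica in REPLICAS:
              if reachable(replica):
                  return replica
          return MAIN  # every replica is blocked by the snapshot: read from the main server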

  • Installing Java 1.5 on Ubuntu?

    - by StackedCrooked
    I already have Java 1.6, but I need to test something with 1.5. I have downloaded the .bin file from http://java.sun.com/javase/downloads/index_jdk5.jsp using the Sun Download Manager. Now I want to create a deb file from this bin file: $ fakeroot make-jpkg java_ee_sdk-5_01-linux.bin Creating temporary directory: /tmp/make-jpkg.Zpm1Y7LbZ0 Loading plugins: blackdown-j2re.sh blackdown-j2sdk.sh common.sh ibm-j2re.sh ibm-j2sdk.sh j2re.sh j2sdk-doc.sh j2sdk.sh j2se.sh sun-j2re.sh sun-j2sdk-doc.sh sun-j2sdk.sh Detected Debian build architecture: i386 Detected Debian GNU type: i486-linux-gnu No matching plugin was found. Removing temporary directory: done How can I fix the "No matching plugin was found." error?

  • Outgoing UDP sniffer in Python?

    - by twneale
    I want to figure out whether my computer is somehow causing a UDP flood that is originating from my network. So that's my underlying problem, and what follows is simply my non-network-person attempt to hypothesize a solution using Python. I'm extrapolating from recipe 13.1 ("Passing Messages with Socket Datagrams") from the Python Cookbook (also here). Would it be possible/sensible/not insane to try somehow writing an outgoing UDP proxy in Python, so that outgoing packets could be logged before being sent on their merry way? If so, how would one go about it? Based on my quick research, perhaps I could start a server process listening on suspect UDP ports and log anything that gets sent, then forward it on, such as: import socket s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.bind(("", MYPORT)) while True: data, addr = s.recvfrom(1024) log.info("Received {data} from {addr}.".format(data=data, addr=addr)) But what about doing this for a large number of ports simultaneously? Impractical? Are there drawbacks or other reasons not to bother with this? Is there a better way to solve this problem (please be gentle)?
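
    As a rough illustration of the "large number of ports" part, a single process can watch several suspect ports at once with select instead of one thread per port; the ports below are placeholders. Note that this only sees datagrams addressed to this host, so to capture packets your own machine is sending out you would still need a raw socket or a capture tool such as tcpdump.

      import logging
      import select
      import socket

      logging.basicConfig(level=logging.INFO)
      SUSPECT_PORTS = [5060, 12345, 33434]  # placeholder ports to watch

      sockets = []
      for port in SUSPECT_PORTS:
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.bind(("", port))
          sockets.append(s)

      while True:
          # Block until any of the bound sockets has a datagram waiting.
          readable, _, _ = select.select(sockets, [], [])
          for s in readable:
              data, addr = s.recvfrom(65535)
              logging.info("port %d: %d bytes from %s", s.getsockname()[1], len(data), addr)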

  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog / logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads: log collection (potentially massive streams of data need to be written quickly to disk and indexed) and log querying (logs will be queried both by fixed fields such as date and source and by text search). What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to off-load some processing? Some disks striped and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.

  • catch-22 with apt-get

    - by Mark J Seger
    I'd recently installed a package and discovered my scp stopped working. After removing and installing some things I got it fixed, but then I started getting errors in apt-get about dpkg: error: configuration error: /etc/dpkg/dpkg.cfg.d/multiarch:1: unknown option 'foreign-architecture'. I commented it out and thought that fixed the problem, until I discovered the Chrome icon in my launcher had turned into a ? and Chrome no longer worked. I tried to reinstall it and got the apt-get error: "ambiguous package name 'libglib2.0-0' with more than one installed instance". If I try to remove libglib2 I get the error: The following packages have unmet dependencies: iceape-browser : Depends: iceape but it is not going to be installed iceape-chatzilla : Depends: iceape (>= 2.7.11) but it is not going to be installed Depends: iceape (<= 2.7.11-1.1~) but it is not going to be installed E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. So then I tried to remove iceape-browser and it complained about the two instances of libglib2. In fact, virtually any command I issue does the same thing, so I don't know what I can do to untangle things.

  • Right way of making a multi-site and multilingual website on CodeIgniter

    - by DR.GEWA
    Hi there. Beforehand, let me thank you all!! Really, guys, you help a lot. When I finish my web site and have time to watch how the user base grows, I will come here again and again to answer other people's questions (if I can). So here is the problem. I made a web site on CodeIgniter: a social network engine, something like phpFox, Classmates.com or Facebook. Right now it is not multilingual, so the UI strings are in the view files; the next step will be to move them to the language files. I want the user to have the ability to change the language, so I assume that in the database each user will have a column "lang_local" which would default to "en" and could then be changed to any other language. What is eating my nerves and energy is the following: I will run several demographic social networks on this engine, and I would like to manage these web sites in a centralized manner with one backend. Whenever I want to launch a new network, I just add the domain settings, install the script in a new folder and add the site to the database. I see it like this: every table in the database (users, comments, messages, categories, etc.) will have a column site_id, and on each add/update/delete query I add a WHERE site_id = XXX; a table sites(site_id, site_name, domain_name) will hold all the domains, so that in the backend I can filter data by website. Is this a good way? What if I then need to go multi-server, and what about load balancing? Who can tell me what the right, PROFESSIONAL way would be? My maximum user limit for a database is something like 10,000 at the start and 100,000 users in one to two years.
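
    A small sketch of the site_id scoping idea, with sqlite3 standing in for the real MySQL connection and the table/column names invented for illustration: wrap the connection once per request so the WHERE site_id = X clause can never be forgotten.

      import sqlite3  # stand-in for the real MySQL connection

      class SiteScopedDB:
          """Wraps a connection so every read/write is scoped to one site_id."""

          def __init__(self, conn, site_id):
              self.conn = conn
              self.site_id = site_id

          def fetch_users(self):
              return self.conn.execute(
                  "SELECT * FROM users WHERE site_id = ?", (self.site_id,)).fetchall()

          def add_comment(self, user_id, text):
              self.conn.execute(
                  "INSERT INTO comments (site_id, user_id, body) VALUES (?, ?, ?)",
                  (self.site_id, user_id, text))
              self.conn.commit()

      # Per request: resolve the Host header against the sites(site_id, site_name,
      # domain_name) table, then hand every model a SiteScopedDB for that site_id.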

  • What is "2LUN" mode in connection with RAID?

    - by naxa
    I've come across RAID products that also list JBOD (just a bunch of disks) mode and 2LUN mode. What the heck is 2LUN mode? I could not find a description; the closest thing seems to be LUN, 'logical unit number', but I don't get the 2LUN thing. UPDATE 1: This is what Wikipedia has to say about JBOD: JBOD (derived from "just a bunch of disks"): an architecture involving multiple hard drives, while making them accessible either as independent hard drives, or as a combined (spanned) single logical volume with no actual RAID functionality. So JBOD can actually mean two different (albeit related) things. Guest's answer says 2LUN means no spanning. Does this suggest that 2LUN simply means the JBOD variant with no spanning?

  • Java Framework Choice Question.

    - by Sarang
    We have many frameworks available in Java: Struts, Swing, JSF 2.0, Spring, etc., each used according to its own priorities. Actually, I don't even know how many there are! But, as I am a fresher to Java, even after studying their architecture I cannot decide which framework should be used with which type of project. Also, I am confused by the mixed use of frameworks, like Spring + JSF. What is the benefit of that? Another thing confusing me is the UI component libraries available on the market: we have PrimeFaces, ICEfaces, MyFaces and RichFaces. They may or may not support AJAX out of the box, and they may contain some bugs as well. What is the best choice of framework + UI components that can directly provide a best-fit solution for any project?

  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat using ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means: setting the best quality given the available bandwidth when the chat starts, and responding to network congestion during the chat by decreasing the quality. The task is similar to dynamic stream switching, but P2P has its specifics that keep dynamic streaming approaches from working. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, it looks like the most reliable QoS metric for P2P is SRTT. In my simulated tests on a local network, it shoots up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the bandwidth value in Camera.setQuality(0, bandwidth) to respond to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.
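
    One common shape for the adjustment is an additive-increase / multiplicative-decrease loop driven by SRTT. The sketch below shows only that control logic; the thresholds and step sizes are guesses to tune, and the resulting value would be fed to Camera.setQuality(0, bandwidth) on the ActionScript side.

      # Additive-increase / multiplicative-decrease on the bandwidth budget, driven by SRTT.
      SRTT_CONGESTED_MS = 400   # above this, assume the path is congested
      SRTT_HEALTHY_MS = 150     # below this, probe for more bandwidth
      MIN_BPS, MAX_BPS = 32000, 1000000

      def next_bandwidth(current_bps, srtt_ms):
          if srtt_ms > SRTT_CONGESTED_MS:
              return max(MIN_BPS, int(current_bps * 0.5))  # back off hard
          if srtt_ms < SRTT_HEALTHY_MS:
              return min(MAX_BPS, current_bps + 16000)     # probe gently upward
          return current_bps                               # comfortable zone: hold

      # Poll SRTT every few seconds and pass the result to Camera.setQuality(0, bandwidth).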

  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5,000 users. By concurrent I mean that 5,000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DL380 servers with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious how many and how powerful servers are needed (number of processors/cores, size of RAM), what network equipment should be used (what kind of switches, network cards), and what other hardware, like particular disk storage solutions, is needed. Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people complain about MySQL Cluster, even here on Stack Overflow).
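
    A back-of-envelope sizing sketch for the web tier, using the 30-50 MB per Apache thread quoted above; the 10:1 ratio of logged-in users to requests actually in flight is an assumption to be replaced with numbers from real access logs.

      concurrent_users = 5000
      inflight_ratio = 0.1        # assumed fraction of users with a request in flight at once
      ram_per_worker_mb = 40      # middle of the 30-50 MB per Apache thread quoted above

      workers_needed = int(concurrent_users * inflight_ratio)        # ~500 busy workers
      web_tier_ram_gb = workers_needed * ram_per_worker_mb / 1024.0  # ~20 GB

      print("%d workers, ~%.0f GB RAM across the web tier" % (workers_needed, web_tier_ram_gb))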

  • Parameters for selecting the operating system, memory and processor for an embedded system?

    - by James
    I am developing embedded real-time system software (in C). I have designed the software architecture: we know the various objects required, the interactions required between them, and the IPC communication between tasks. Based on this information, I need to decide on the operating system (RTOS), microprocessor and memory size requirements. (Most likely I would be using Quadros, as it has been suggested by the client based on their prior experience in similar projects.) But I am confused about which one to begin with, since the choice of one could impact the selection of the others. Could you also guide me on the parameters to consider when estimating the memory requirements from the software design (lower and upper limits of the memory requirement)? (The cost of the components can be ignored for this evaluation.)
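
    A rough way to bound the RAM requirement from the design is to sum static data, per-task stacks and IPC queue depths for the lower limit, and add the heap reserve for the upper limit; every figure in the sketch below is a placeholder to be replaced with numbers from the linker map and the task design.

      # Rough RAM budget: static data + per-task stacks + IPC queues (+ heap for the upper bound).
      tasks = {                    # task name -> stack bytes (placeholders)
          "comms": 4096,
          "control_loop": 2048,
          "logger": 1024,
      }
      static_data_bss = 48 * 1024  # .data + .bss from the linker map
      heap_reserve = 64 * 1024
      ipc_queues = 8 * 512         # 8 queues x 512-byte depth

      ram_lower = static_data_bss + sum(tasks.values()) + ipc_queues
      ram_upper = ram_lower + heap_reserve
      print("RAM estimate: %d-%d KiB" % (ram_lower // 1024, ram_upper // 1024))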

  • Unable to run jQuery Masonry in a Sencha Touch 2 app

    - by torr
    In my ST2 app I have in index.html <script id="jquery" type="text/javascript" src="resources/js/jquery-1.8.2.js"></script> <script id="masonry" type="text/javascript" src="resources/js/masonry-2.1.5.js"></script> as well as <script> $(document).ready(function(){ var $container = $('.x-list-container'); $container.imagesLoaded(function(){ $container.masonry({ itemSelector: '.x-list-item', }); }); }); </script> My front page on the app is a list, so it wraps elements as such <div class="x-list-container"> <div class="x-list-item"></div> // 'x-list-item' is the block I want to float <div class="x-list-item"></div> ... </div> However, once everything is loaded and I inspect .x-list-item there is no style positioning the div. I am experienced with Masonry on regular web apps but can't make it work here. It's like Masonry isn't running at all. It is being loaded (checked on the network tab) and jQuery is being loaded (on the network tab and tested using if (jQuery) { alert('hey there'); }). Any idea what I'm doing wrong?

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what the best way to implement data syncing is. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store Last Modified Date in iPhone 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) Server will process and send back only data modified since last time. 4) This data is formatted as so: <+><data id="..."></data></+> // add this to SQLite/CoreData <-><data id="..."></data></-> // remove this <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value I don't want to make <+, <-, <%... for each attribute as well, because it would be too complicated, so probably when I receive a <% field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I will update the Last Modified Date field. The main problem with this strategy is: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again, not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?
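
    On the atomicity worry, one sketch (shown here with sqlite3; a single Core Data save has the same shape) is to apply one change set and the Last Modified Date in the same transaction, so a dropped connection simply leaves the old cursor in place and the next launch re-requests the same delta. Table and column names are invented for illustration.

      import sqlite3

      def apply_changeset(db_path, changes, new_last_modified):
          """Apply one server change set atomically; the sync cursor only moves on commit."""
          conn = sqlite3.connect(db_path)
          try:
              with conn:  # one transaction: all rows plus the cursor update, or none of them
                  for op, row in changes:           # op is '+', '-' or '%'
                      if op == "-":
                          conn.execute("DELETE FROM items WHERE id = ?", (row["id"],))
                      else:                          # '+' and '%' both become an upsert
                          conn.execute(
                              "INSERT OR REPLACE INTO items (id, value) VALUES (?, ?)",
                              (row["id"], row["value"]))
                  conn.execute("UPDATE sync_state SET last_modified = ?",
                               (new_last_modified,))
          finally:
              conn.close()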

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in my BaseAddresses that I hook my endpoints to. When trying to connect to an endpoint using the WCF test client, I have no problem connecting to the endpoint and getting the metadata using the machine's name. The problem that I run into is that the client that is generated from metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming that this is because localhost is the endpoint URL published by metadata. As a result, any calls to the methods on the service will fail since localhost on the calling machine isn't running the service. What I would like to figure out is if it is possible for the service metadata to publish the proper URL to a client depending on the client who is calling it. For example, if I was requesting the service metadata from a machine on the same network as the server the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I was requesting it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously if the client was on the same machine, THEN the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to publish the information in a non-contextual way? Is there another way I should be configuring my BaseAddresses in order to get the functionality I want? Thanks, Mike

  • Getting WCF web services working with an NLB server

    - by gguth
    I'm starting the architecture of a new project using WCF, but I'm not the right person to make network decisions, so I'm doing some research but cannot find the answers to these questions: We'll host the WCF service in a plain Windows Service app on 2 servers, and we'll have another server doing the load-balancing job using WNLB. Can the fact that we are hosting the WCF service in a Windows Service app disturb the NLB job? Before my research I thought load balancing was tough to configure, but with NLB it seems to be very simple; is it really that simple? Note: the binding will be basicHttpBinding.

  • Can't connect to SQL Server 2005 Express from an ASP.NET C# page

    - by Aviv
    Hey guys, I have an MS SQL Server 2005 Express running on a VPS. I'm using pymssql in Python to connect to my server with the following code: conn = pymssql.connect(host='host:port', user='me', password='pwd', database='db') and it works perfectly. When I try to connect to the server from an ASP.NET C# page, I use the following code: SqlConnection myConnection = new SqlConnection("Data Source=host,port;Network Library=DBMSSOCN; Initial Catalog=db;User ID=me;Password=pwd;"); myConnection.Open(); When I run the ASP.NET page I get the following exception at myConnection.Open(): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) I tried restarting the SQL Server but I had no luck. Can anyone point out what I'm missing here? Thanks!

  • When using TCP load balancing with HAProxy, does all outbound traffic flow through the LB?

    - by user122875
    I am setting up an app to be hosted using VMs (probably Amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing of a large number (50k or so if possible) of persistent TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution (such as LVS or even nginx_tcp_proxy_module) be a better fit?

  • Nothing gets displayed when debugging Microsoft Surface

    - by leftend
    I am trying to write my first Microsoft Surface application, and am doing the development on the Surface unit itself. I successfully re-created the quick "Photos" application that is shown on the PDC Video, however, now that I'm trying to create my own app - nothing actually shows up when I run it. I'm mainly just adding ScatterViews right now - and they show up fine in the designer, but as soon as I hit F5 - the shell is shown on the surface - but none of the ScatterViews show up. Here's my xaml code so far. Am I missing something?? <s:SurfaceWindow x:Class="SurfaceUITreeDemo.TreeDemo" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:s="http://schemas.microsoft.com/surface/2008" Title="WYITT Tree Demo" AllowsTransparency="False" Background="Beige"> <s:SurfaceWindow.Resources> <ImageBrush x:Key="WindowBackground" Stretch="None" Opacity="0.6" ImageSource="pack://application:,,,/Resources/TreeDemoBackground.jpg"/> </s:SurfaceWindow.Resources> <Grid Background="{StaticResource WindowBackground}" > <s:ScatterView Name="test"> <Image Source="C:\dev\Network-Alt-icon.png"/> </s:ScatterView> <s:ScatterView Height="100" Margin="456,160,348,0" Name="scatterView1" VerticalAlignment="Top" Background="Black"> <s:ScatterViewItem Height="100" Width="150"> <Image Source="C:\dev\Network-Alt-icon.png"/> </s:ScatterViewItem> <s:ScatterViewItem></s:ScatterViewItem> ScatterView </s:ScatterView> </Grid> </s:SurfaceWindow>

  • Reading system.net/mailSettings/smtp from Web.config in Medium trust environment

    - by Carson63000
    Hi, I have some inherited code which stores the SMTP server, username and password in the system.net/mailSettings/smtp section of the Web.config. It used to read them like so: Configuration c = WebConfigurationManager.OpenWebConfiguration(HttpContext.Current.Request.ApplicationPath); MailSettingsSectionGroup settings = (MailSettingsSectionGroup)c.GetSectionGroup("system.net/mailSettings"); return settings.Smtp.Network.Host; But this was failing when I had to deploy to a medium trust environment. So following the answer from this question, I rewrote it to use GetSection() like so: SmtpSection settings = (SmtpSection)ConfigurationManager.GetSection("system.net/mailSettings/smtp"); return settings.Network.Host; But it's still giving me a SecurityException on Medium trust, with the following message: Request for ConfigurationPermission failed while attempting to access configuration section 'system.net/mailSettings/smtp'. To allow all callers to access the data for this section, set section attribute 'requirePermission' equal 'false' in the configuration file where this section is declared. So I tried this requirePermission attribute, but can't figure out where to put it. If I apply it to the <smtp> node, I get a ConfigurationError: "Unrecognized attribute 'requirePermission'. Note that attribute names are case-sensitive." If I apply it to the <mailSettings> node, I still get the SecurityException. Is there any way to get at this config section programmatically under medium trust? Or should I just give up on it and move the setting into <appSettings>?

  • Android getting XML values

    - by Nils
    Hello, I have the following XML code, which I got from a UPnP device, and I would like to get the res value - the RTSP URL, in this case rtsp://10.42.0.103:554/live.sdp. How can I do this? I heard that Android has some built-in support for reading XML. Is that true? <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/"> <item id="11" parentID="1" restricted="1"> <dc:title>Network Camera Stream 1</dc:title> <upnp:class>object.item.videoItem</upnp:class> <res protocolInfo="rtsp-rtp-udp:*:video/mpeg4-generic:*" resolution="640x480">rtsp://10.42.0.103:554/live.sdp</res> </item> <item id="12" parentID="1" restricted="1"> <dc:title>Network Camera Stream 2</dc:title> <upnp:class>object.item.videoItem</upnp:class> <res protocolInfo="rtsp-rtp-udp:*:video/mpeg4-generic:*" resolution="176x144">rtsp://10.42.0.103:554/live2.sdp</res> </item> </DIDL-Lite>
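
    The extraction itself is just a namespace-aware walk; here is a sketch with Python's ElementTree to show the shape of it (on Android the same walk is usually written with the built-in XmlPullParser or DOM parser). The namespace prefixes are local aliases, not part of the document.

      import xml.etree.ElementTree as ET

      NS = {
          "didl": "urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/",
          "dc": "http://purl.org/dc/elements/1.1/",
      }

      def extract_streams(didl_xml):
          """Return (title, rtsp_url) pairs from a DIDL-Lite document string."""
          root = ET.fromstring(didl_xml)
          streams = []
          for item in root.findall("didl:item", NS):
              title = item.findtext("dc:title", namespaces=NS)
              url = item.findtext("didl:res", namespaces=NS)  # e.g. rtsp://10.42.0.103:554/live.sdp
              streams.append((title, url))
          return streams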

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this: It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day). The data is parsed. Part of the data is processed in a way that compares it to pre-existing data (read from the database); calculations are made and stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adding a few more records. Then, the data from step 2 should be kept somehow, somewhere, so that next time data is fetched there is a data set which can be used to perform calculations without touching pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (key information can be read from the database if needed), but it would speed up the process next time. Application components can (and will) run on different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure whether I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess that would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me: what if the data were 5x that amount? What if it consumed 1-2 GB of cache only to keep data between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure whether they can persist between sessions and be used by other hosts on the network... Any other suggestions? Something I should consider?
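
    If memcached does turn out to be acceptable, a sketch of the usage pattern: splitting the intermediate set into one compressed value per source/batch keeps individual items under memcached's default 1 MB value limit and lets any host in the network pick the data up on the next iteration. The cache host name and expiry are placeholders.

      import pickle
      import zlib

      import memcache  # python-memcached; any client with get/set works the same way

      mc = memcache.Client(["cachehost:11211"])  # placeholder cache host

      def save_batch(batch_id, records):
          """Keep this run's comparison set so the next run can reuse it."""
          blob = zlib.compress(pickle.dumps(records))
          mc.set("batch:%s" % batch_id, blob, time=6 * 3600)  # expire after 6 hours

      def load_batch(batch_id):
          blob = mc.get("batch:%s" % batch_id)
          return pickle.loads(zlib.decompress(blob)) if blob else None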

  • How to make AD highly available for applications that use it as an LDAP service

    - by Beaming Mel-Bin
    Our situation: We currently have many web applications that use LDAP for authentication. For this, we point each web application to one of our AD domain controllers using the LDAPS port (636). When we have to update a domain controller, this has caused us issues because one or more web applications could depend on any given DC. What we want: We would like to point our web applications to a cluster "virtual" IP. This cluster will consist of at least two servers (so that each cluster server can be rotated out and updated). The cluster servers would then proxy LDAPS connections to the DCs and be able to figure out which one is available. Questions: For anyone who has had experience with this: what software did you use for the cluster? Any caveats? Or perhaps a completely different architecture to accomplish something similar?

  • Protocol-specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message from the network, I understand from the documentation that I first get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it. To determine the message type, I could dedicate a port to each type of message, but I would be interested to know if there exists another solution for that. My question is more about how to decode the ChannelBuffer. Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler? To illustrate my question, let's take the HttpRequestDecoder as an example; its hierarchy is: java.lang.Object org.jboss.netty.channel.SimpleChannelUpstreamHandler org.jboss.netty.handler.codec.frame.FrameDecoder org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State> org.jboss.netty.handler.codec.http.HttpMessageDecoder org.jboss.netty.handler.codec.http.HttpRequestDecoder Also, do I need to use two different ChannelHandlers for decoding and encoding, or is it possible to use a single ChannelHandler for both? Thanks

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on) but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing: [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ... [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031) Using lsof I check the number of file descriptors and I see quite a few entries which look like: java 18510 root 8556u sock 0,4 1555182 can't identify protocol java 18510 root 8557u sock 0,4 1555320 can't identify protocol java 18510 root 8558u sock 0,4 1555736 can't identify protocol java 18510 root 8559u sock 0,4 1555883 can't identify protocol If I do a count of open file descriptors every minute I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck
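
    For the "count every minute" step, a small sketch that samples the descriptor count straight from /proc (the PID below is a placeholder) makes it easy to line the growth up against timestamps in the server log and narrow down which activity adds the 12 sockets per minute.

      import os
      import time

      GLASSFISH_PID = 18510  # placeholder: the java process seen in lsof

      while True:
          fd_count = len(os.listdir("/proc/%d/fd" % GLASSFISH_PID))
          print(time.strftime("%Y-%m-%d %H:%M:%S"), fd_count)
          time.sleep(60)  # steady +12/minute growth should show up as a straight line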

  • Linux compilers for C/C++ on AMD "Bulldozer" CPUs like the Interlagos [closed]

    - by jstarek
    I am looking for a Linux compiler for C/C++ code that supports AMD's new "Bulldozer" architecture and produces efficient binaries for the Interlagos-series Opterons. This seems to be a bit difficult because of the peculiarities of the Bulldozer microarchitecture. While AMD has a whitepaper with some details, I would like to see some independent analyses. The relevant paper from HeCToR focuses mostly on job placement and scheduling, which is an area we already investigate. So, who can recommend a good compiler comparison for Bulldozers running Linux? Does anyone have well-described benchmarks?

    Read the article
