Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Open Office: How to disable image link updates

    - by Max Kielland
    I'm writing a user manual for a card game, and it contains a lot of linked images. Open Office becomes very slow because every time I flip to a page with linked images it starts to update them. Is it possible to tell Open Office NOT to update the links until I tell it to do so? I would like it to display the same snapshot it showed the last time I initiated a link update. I'm using Open Office v3.3.0. Thank you.


  • WCF service behind F5 BigIP endpoint issue

    - by John
    We have WCF services hosted on IIS on 4 web servers. These web servers sit behind the F5 BigIP load balancing system and SSL accelerator. When the client calls the service it calls https://myserver.com/myservices.svc, but the .svc actually lives on different servers, like http://web1.myserver.com/myservices.svc, http://web2.myserver.com/myservices.svc, etc. How do we handle this issue?
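
    One workaround often suggested for this shape of deployment (my assumption, not from the original question) is to relax WCF's addressing check so each backend node accepts messages addressed to the public host name rather than its own web1/web2 base address. A minimal C# sketch with hypothetical service names:

        using System.ServiceModel;

        // AddressFilterMode.Any makes the dispatcher accept messages whose
        // WS-Addressing To header (https://myserver.com/...) does not match
        // this node's own base address (http://web1.myserver.com/...).
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class MyServices : IMyServices
        {
            // service operations unchanged
        }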


  • Populate a combo box with one data table and save the choice in another

    - by Scott Chamberlain
    I have two data tables in a data set. For example, let's say Servers: | Server | Ip | and Users: | Username | Password | Server |, with a foreign key on Users that points to Servers. How do I make a combo box that lists all of the servers? Then how do I make the selected server in that combo box be saved in the data set, in the Server column of the Users table? EDIT -- I haven't tested it yet, but is this the right way?
        this.bsServers.DataMember = "Servers";
        this.bsServers.DataSource = this.dataSet;
        // Bind the users binding source through the FK relation so it follows the combo box
        this.bsUsers.DataMember = "FK_Users_Servers";
        this.bsUsers.DataSource = this.bsServers;
        this.serverComboBox.DataBindings.Add(new System.Windows.Forms.Binding("SelectedValue", this.bsUsers, "Server", true));
        this.serverComboBox.DataSource = this.bsServers;
        this.serverComboBox.DisplayMember = "Server";
        this.serverComboBox.ValueMember = "Server";


  • Updating Visual FoxPro from SQL Server

    - by David Stein
    I'm trying to update some simple Visual FoxPro tables with SQL Server. I've created a linked server with the following: sp_addlinkedserver @server = 'UTIL', @srvproduct = 'VFP', @provider = 'VFPOLEDB', @datasrc = 'L:\M2MDATA\Util\util.dbc' GO And the following works: select * from UTIL...utcomp However, I cannot use the following statement: update util...utcomp set fmaddress = '123 Elvis Dr.' where fcsqldb = 'M2MDATA01' I receive the error: OLE DB provider "VFPOLEDB" for linked server "util" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.". Msg 7333, Level 16, State 2, Line 2 Cannot fetch a row using a bookmark from OLE DB provider "VFPOLEDB" for linked server "util". I have the latest version (9.0) installed so I should have the latest provider. Am I doing something wrong? Is it not possible to update VFP from SQL?
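
    A hedged sketch of one workaround (my suggestion, not from the original post): if the four-part-name UPDATE trips over the provider's bookmark/rowset support, a passthrough update via OPENQUERY sometimes works, because the VFP engine then locates the rows itself instead of SQL Server fetching them by bookmark. Table and column names are taken from the question; this assumes the VFPOLEDB provider exposes an updatable rowset for the query.

        UPDATE OPENQUERY(UTIL,
            'SELECT fmaddress FROM utcomp WHERE fcsqldb = ''M2MDATA01''')
        SET fmaddress = '123 Elvis Dr.';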


  • How to link a SQLite extension source file into Xcode for iPhone?

    - by crunchyt
    I use a statically linked library for SQLite in an iPhone Xcode project. I am now trying to include a .c extension to SQLite in this project. However, I am having trouble making the SQLite in the build SEE the extension. The statically linked SQLite library works fine. The .c extension also works on my desktop, and builds fine as a statically linked library in Xcode. However, the custom functions it defines are missing when called. For example, I load the extension like so, with no errors: SELECT load_extension('extension_name.so'); But when I try to call a function defined in the extension, I get this message: DB Error: 1 "no such function: custom_function". Does anyone know much about linking a SQLite extension into an Xcode project?
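
    A hedged sketch of one common route (my suggestion, not from the original post): when SQLite is statically linked into the app, load_extension() has no shared library image to bind against, so you can instead compile the extension's .c file into the target and register its entry point with sqlite3_auto_extension. The init function name below is hypothetical - use whatever entry point the extension actually defines.

        /* Register a statically compiled extension for every future connection.
           sqlite3_auto_extension() is part of the public SQLite C API. */
        #include "sqlite3.h"

        extern int sqlite3_extension_init(sqlite3 *db, char **pzErrMsg,
                                          const sqlite3_api_routines *pApi);

        void register_custom_functions(void)
        {
            /* The function-pointer cast matches the pattern shown in the SQLite docs. */
            sqlite3_auto_extension((void (*)(void))sqlite3_extension_init);
        }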


  • Use of double pointer in Linux kernel hash list implementation

    - by bala1486
    Hi, I am trying to understand the Linux kernel implementation of linked lists and hash tables. A link to the implementation is here. I understood the linked list implementation, but I am a little confused about why a double pointer is used in hlist (**pprev). The link for hlist is here. I understand that hlist is used in the hash table implementation since the head of the list requires only one pointer, which saves space. Why can't it be done using a single pointer (just *prev, like in the linked list)? Please help me.
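
    For illustration, here is a simplified sketch of the two hlist types and of deletion (paraphrased from the kernel's list.h): pprev points at whatever pointer currently points at this node - the head's first field for the first node, or the previous node's next field otherwise. A plain *prev of type struct hlist_node * could not point back at the head (which is a different, single-pointer type, kept small to halve the size of the hash buckets), so every operation would need a special case for the first node.

        struct hlist_head { struct hlist_node *first; };
        struct hlist_node { struct hlist_node *next, **pprev; };

        /* Deletion needs no "am I the first node?" branch: */
        static void hlist_del(struct hlist_node *n)
        {
            *n->pprev = n->next;       /* works whether predecessor is head or node */
            if (n->next)
                n->next->pprev = n->pprev;
        }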


  • ISQL Perform instruction: after editadd editupdate of table vs. after add update of table

    - by Frank Computer
    INFORMIX-SQL 7.3 Perform screens: According to the documentation, in an "after editadd editupdate of table" control block the instructions are executed before the row is added or updated in the table, whereas in an "after add update of table" control block the instructions are executed after the row has been added or updated. That supposedly means any instructions which alter values of field-tags linked to table.columns would not be committed to the table, while field-tags linked to displayonly fields would still change. However, when using "after add update of table" I placed instructions which alter values of field-tags linked to table.columns, and their displayed and committed values also changed! I would have thought that an "after add update of table" block could only alter displayonly fields.


  • Question on Pointer Arithmetic

    - by pws5068
    Hey everybody! I am trying to create a memory management system, so that a user can call myMalloc, a method I created. I have a linked list keeping track of my free memory. My problem is when I am attempting to find the end of a free bit in my linked list. I am attempting to add the size of the free memory in that section (which is in the linked list) to the pointer to the front of the free space, like this: void *tailEnd = previousPlace->head_ptr + ((previousPlace->size+1)*sizeof(int)); I was hoping that this would give me a pointer to the end of that segment. However, I keep getting the warning: "pointer of type 'void*' used in arithmetic". Is there a better way of doing this? Thanks!
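
    A hedged fix sketch (my suggestion, not from the original post): ISO C forbids arithmetic on void * because the pointee size is unknown; gcc permits it as a byte-stride extension, hence only a warning. Casting to char * makes the byte arithmetic explicit; the struct and field names are taken from the question.

        /* Step past (size + 1) ints, counting in bytes via char *. */
        void *tailEnd = (char *)previousPlace->head_ptr
                      + (previousPlace->size + 1) * sizeof(int);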


  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their products and should be reasonably scalable and available, though these qualities are of only medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service plus a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory system web service, but with good caching of its outputs in the application it should be OK too. Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7 GB and a single 2-core "modern CPU" at around 2.5 GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large Instance (64-bit, 7.5 GB RAM, 2 cores at 1 GHz). So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are any real-world experiences with something similar. Thank you very much!
    /Jakub Holy
    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(


  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (the test reaches this user load around 2.5 hours after it starts; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under the category ASP.NET Applications) suddenly spikes to values of 36-72 million! This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs of the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8 core web servers. Summarizing... Issue: The counter Requests/Sec (under the category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec. on the 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase: its graph almost overlaps with that of the Requests/Sec counter until the spike appears, and when the spike appears this counter actually shows a drop. Test settings: Performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load 5000 users. This load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). This issue is reproducible though it happens intermittently (i.e. occurs once in three or four runs). Test environment: Web site deployed on 4 web servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones. .NET Framework 3.5 SP1 is installed on all 4 web servers. The application is hosted on IIS 6.0 running in worker process isolation mode.


  • DAO, Spring and Hibernate

    - by EugeneP
    Correct me if anything is wrong. When we use Spring DAO for ORM templates with the @Transactional attribute, we do not have control over the transaction and/or session when the method is called externally, only within the method. Lazy loading saves resources - fewer queries to the DB, and less memory needed to keep all the fetched collections in the app. So if lazy=false, then everything is fetched, including all associated collections, which is not efficient if there are 10,000 records in a linked set. Now, I have a method in a DAO class that is supposed to return a User object. It has collections that represent linked tables in the database. I need to get an object by id and then query its collections. Hibernate's "failed to lazily initialize a collection" exception occurs when I try to access the linked collection that this DAO method returns. Please explain, what is the workaround here?
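
    A hedged sketch of the usual workaround (not from the original question; class and field names are hypothetical): either fetch the collection eagerly in the query for this one use case, or touch the lazy collection while the transactional session is still open, e.g. via Hibernate.initialize, so it is populated before the session closes.

        // Service-layer method: the collection is forced to load inside the
        // @Transactional boundary, so no LazyInitializationException later.
        import org.hibernate.Hibernate;
        import org.springframework.transaction.annotation.Transactional;

        public class UserService {
            private UserDao userDao; // hypothetical DAO wrapping the ORM template

            @Transactional(readOnly = true)
            public User getUserWithCollections(long id) {
                User user = userDao.findById(id);
                Hibernate.initialize(user.getLinkedRecords()); // hypothetical collection
                return user;
            }
        }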


  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):
        ;; ANSWER SECTION:
        orion.2x.to. 116 IN A 80.237.201.41
        orion.2x.to. 116 IN A 87.230.54.12
        orion.2x.to. 116 IN A 87.230.100.10
        orion.2x.to. 116 IN A 87.230.51.65
    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and they don't all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely - the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers fall back to 120 if a lower TTL is specified... so I have heard), I think something like this should occur in an ideal scenario:
        first 120 seconds:
        50% of requests get server A -> keep it for 240 seconds
        50% of requests get server B -> keep it for 120 seconds
        second 120 seconds:
        50% of requests still have server A cached -> keep it for another 120 seconds
        25% of requests get server A -> keep it for 240 seconds
        25% of requests get server B -> keep it for 120 seconds
        third 120 seconds:
        25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
        25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
        25% will have server A cached for another 120 seconds
        12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
        12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec
        fourth 120 seconds:
        25% will have server A cached -> cache for another 120 secs
        12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
        12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
        6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
        6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
        12.5% will have server A cached -> cache another 120 secs
    I think I lost something at this point, but I think you get the idea... As you can see this gets pretty complicated to predict, and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is controlled just by the root server: it cycles through DNS records when responding and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs. hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight a round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck, it does not eliminate it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.


  • Clustering and DB Replication in virtualized (and cloud) environments

    - by devdude
    Both replication and clustering are terms for server setups with physical (real) servers, usually implemented at the DB or AS level. Now the question: in a virtualized environment with "easily" scalable servers (touching on clustering) and higher availability (DB replication) achieved through the high availability of the virtual servers provided by the cloud-server provider, do we really need replication and clustering (in the sense of covering the problems of traditional servers)? The question is asked from a solution/application provider viewpoint. Please exclude the need for replication driven by a business requirement, e.g. the need to replicate a DB at 2 different geographical locations to ensure performance and data. Thanks for your insights!



  • How to build Mach-O for different architectures?

    - by Victor Lin
    I have some dylibs to load from Python with ctypes. I can load libbass.dylib without problems, but I can't load the self-compiled libmp3lame.dylib. Here is the error I get: OSError: dlopen(libmp3lame.dylib, 6): no suitable image found. Did find: libmp3lame.dylib: mach-o, but wrong architecture. Then I inspected the file types of those libs. Here is the result for libbass.dylib:
        libbass.dylib: Mach-O universal binary with 2 architectures
        libbass.dylib (for architecture i386): Mach-O dynamically linked shared library i386
        libbass.dylib (for architecture ppc): Mach-O dynamically linked shared library ppc
    And here is the self-compiled one:
        libmp3lame.dylib: Mach-O 64-bit dynamically linked shared library x86_64
    I compiled the lame library following the install instructions: ./configure; make; make install. I'm new to the Mac system, and here is the problem: how do I build libmp3lame.dylib so that it supports the different architectures I want? Thanks.
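
    A hedged sketch of the usual approach (my suggestion, not from the original post): pass several -arch flags so the build produces a universal (multi-arch) library, choosing the arch list to match the Python interpreter that loads it (e.g. i386/ppc, like the libbass.dylib above, for a 32-bit interpreter). --disable-dependency-tracking is commonly needed because multi-arch compiles break gcc's dependency output.

        # Rebuild lame as a universal dylib (arch list is an example):
        ./configure CFLAGS="-arch i386 -arch x86_64" \
                    LDFLAGS="-arch i386 -arch x86_64" \
                    --disable-dependency-tracking
        make
        make install

    Afterwards, `file libmp3lame.dylib` should report a universal binary listing both architectures.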


  • How to Serialize a Binary Tree

    - by Veljko Skarich
    I went to an interview today where I was asked to serialize a binary tree. I implemented an array-based approach where the children of node i (numbering in level-order traversal) were at index 2*i for the left child and 2*i + 1 for the right child. The interviewer seemed more or less pleased, but I'm wondering what serialize means exactly? Does it specifically pertain to flattening the tree for writing to disk, or would serializing a tree also include just turning the tree into a linked list, say? Also, how would we go about flattening the tree into a (doubly) linked list, and then reconstructing it? Can you recreate the exact structure of the tree from the linked list? Thank you!
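
    For what it's worth, a minimal C sketch of one standard meaning of "serialize" (my illustration, not necessarily the interviewer's expected answer): flatten the tree to a stream such that the exact structure can be rebuilt. A preorder walk with an explicit null marker is enough; deserialization reads the tokens back in the same order.

        #include <stdio.h>

        struct node { int val; struct node *left, *right; };

        /* Preorder dump; '#' marks an absent child so the shape is recoverable. */
        static void serialize(const struct node *n, FILE *out)
        {
            if (!n) { fputs("# ", out); return; }
            fprintf(out, "%d ", n->val);
            serialize(n->left, out);
            serialize(n->right, out);
        }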


  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, this program I'm working on is about a social network, which means there are users and their profiles. The profiles structure is UserProfile. Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and inside there's a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge and a pointer to the Vertex owner. I have 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head and there are ~18000 users. The second file takes ages, but I still insert the edges at the head. The file has about ~520000 lines of user relations and takes between 13-15 minutes to insert into the Graph. I made a quick test, and reading the data is pretty quick, instantaneous really. The problem is the insertion. This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to look up 2 vertices so I can link them together. This is the problem... Doing this for ~520000 relations takes a while. How should I solve this? Solution 1) Some people recommended implementing the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is this? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility: I can have whatever size I want as long as there's memory for it. But the array doesn't; how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity. And using an array is good for time complexity but bad for space complexity. Any thoughts about this solution? Solution 2) This project also demands that I have some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining as the collision resolution, and when a load factor of 0.70 is reached I normally recreate the table. I base the next table size on http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the user profile itself. That would be stupid: changing data would require 3 changes, and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each Graph Vertex. So, I have 3 data structures, one Graph and two hash tables, and every single one of them points to the same exact UserProfile. The Graph structure will serve the purpose of finding the shortest path and things like that, while the hash tables serve as quick indexes by name and ID. What I'm thinking of to solve my Graph problem is, instead of having the hash table values point to the UserProfile, point them to the corresponding Vertex. It's still a pointer, no more and no less space is used, I just change what I point to. Like this, I can easily and quickly look up each Vertex I need and link them together. This will insert the ~520000 relations pretty quickly. I thought of this solution because I already have the hash tables and I need to have them, so why not take advantage of them for indexing the Graph vertices instead of the user profiles? It's basically the same thing: I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But do you see any cons of this second solution compared to the first one? Or only pros that outweigh the pros and cons of the first solution? Other solutions) If you have any other solution, I'm all ears, but please explain the pros and cons of that solution over the previous 2. I really don't have much time to waste with this right now; I need to move on with this project, so if I'm going to make such a change I need to understand exactly what to change and whether that's really the way to go. Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this and I really need to make a change. P.S: When answering my proposed solutions, please enumerate them as I did, so I know exactly what you are talking about and I don't confuse myself more than I already am.
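
    A sketch of what Solution 2 proposes (types are hypothetical, for illustration only): the name/ID hash tables index the Vertex nodes directly, and each Vertex keeps the shared UserProfile pointer, so profile lookup costs one extra dereference while edge insertion gets an O(1) average-case vertex lookup instead of a linked-list scan.

        typedef struct Vertex {
            struct UserProfile *profile;  /* shared profile, as before */
            struct Edge *edges;           /* adjacency list */
            struct Vertex *next;          /* next vertex in the graph's list */
        } Vertex;

        typedef struct HashEntry {
            const char *key;              /* name, or ID rendered as a string */
            Vertex *vertex;               /* now points at the graph node...  */
            struct HashEntry *next;       /* separate chaining, as described  */
        } HashEntry;                      /* ...profile is vertex->profile    */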


  • Exporting Excel to a SQL Server temporary table

    - by Renju
    I need to take all the values in an Excel file into a temporary/physical table in SQL Server 2005. I don't want to use the Import/Export method. I tried the following linked server method:
        SELECT * INTO db1.dbo.table1
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
            'Driver={Microsoft Excel Driver (*.xls)};DBQ=c:\renju.xls',
            'SELECT * FROM [sheet1$]')
    But it is returning the error: OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "Could not find installable ISAM.". I'm using Excel 2003, and I've already added the linked server for "Microsoft.Jet.OLEDB.4.0".
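
    A hedged sketch (my suggestion, not from the original post): the string passed as the second argument above is an ODBC Driver= connection string, but OPENROWSET's OLE DB providers expect an 'Excel 8.0;Database=...' style string, and that mismatch is a classic cause of "Could not find installable ISAM". For an Excel 2003 .xls file the Jet 4.0 provider is the usual pairing (this assumes ad hoc distributed queries are enabled on the SQL Server 2005 instance).

        SELECT *
        INTO db1.dbo.table1
        FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                        'Excel 8.0;Database=c:\renju.xls',
                        'SELECT * FROM [sheet1$]');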


  • What is the most EVIL code you have ever seen in a production enterprise environment?

    - by Registered User
    What is the most evil or dangerous code fragment you have ever seen in a production environment at a company? I've never encountered production code that I would consider to be deliberately malicious and evil, so I'm quite curious to see what others have found. The most dangerous code I have ever seen was a stored procedure two linked servers away from our core production database server. The stored procedure accepted any NVARCHAR(8000) parameter and executed the parameter on the target production server via a double-jump sp_executeSQL command. That is to say, the sp_executeSQL command executed another sp_executeSQL command in order to jump two linked servers. Oh, and the linked server account had sysadmin rights on the target production server.


  • [C#] What does MissingManifestResourceException mean and how to fix it?

    - by Timwi
    The situation:
    - I have a class library, called RT.Servers, containing a few resources (of type byte[], but I don't think that's important)
    - The same class library contains a method which returns one of those resources
    - I have a simple program (with a reference to that library) that only calls that single method
    - I get a MissingManifestResourceException with the following message: Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "Servers.Resources.resources" was correctly embedded or linked into assembly "RT.Servers" at compile time, or that all the satellite assemblies required are loadable and fully signed.
    I have never played around with cultures, or with assembly signing, so I don't know what's going on here. Also, this works in another project which uses the same library. Any ideas? Edit: I checked the .resx file; all the resources are marked as "Culture=neutral" there. Also, I noticed a similar question and went to check the namespace in Resources.Designer.cs, but it's correct (it's "RT.Servers").


  • Improving TCP performance over a gigabit network with lots of connections and high traffic, for storage and streaming services

    - by Linux Guy
    I have two servers. Both servers have the same hardware specification: Processor: dual processor; RAM: over 128 GB; Hard disk: SSD; outgoing traffic bandwidth: 3 Gbps; network card speed: 10 Gbps. Server A: for encoding videos. Server B: for storing videos and streaming them over a web interface, like YouTube. The inbound bandwidth between the two servers is 10 Gbps; the outbound internet bandwidth is 500 Mbps. Both servers use public IP addresses on the public and private network. Both servers transfer and connect on the nginx port, and Server B is used for streaming media, like YouTube video streams. Both servers are on the same network, yet when I ping from Server A to Server B I get high latency, well above 1.0 ms, in the range time=52.7 ms to time=215.7 ms. This is the output of the iftop utility:
        353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb
        server.example.com => ip.address 6.36Mb 4.31Mb 1.66Mb <= 158Kb 94.8Kb 35.1Kb
        server.example.com => ip.address 1.23Mb 4.28Mb 1.12Mb <= 17.1Kb 83.5Kb 21.9Kb
        server.example.com => ip.address 395Kb 3.89Mb 1.07Mb <= 6.09Kb 109Kb 28.6Kb
        server.example.com => ip.address 4.55Mb 3.83Mb 1.04Mb <= 55.6Kb 45.4Kb 13.0Kb
        server.example.com => ip.address 649Kb 3.38Mb 1.47Mb <= 9.00Kb 38.7Kb 16.7Kb
        server.example.com => ip.address 5.00Mb 3.32Mb 1.80Mb <= 65.7Kb 55.1Kb 29.4Kb
        server.example.com => ip.address 387Kb 3.13Mb 1.06Mb <= 18.4Kb 39.9Kb 15.0Kb
        server.example.com => ip.address 3.27Mb 3.11Mb 1.01Mb <= 81.2Kb 64.5Kb 20.9Kb
        server.example.com => ip.address 1.75Mb 3.08Mb 2.72Mb <= 16.6Kb 35.6Kb 32.5Kb
        server.example.com => ip.address 1.75Mb 2.90Mb 2.79Mb <= 22.4Kb 32.6Kb 35.6Kb
        server.example.com => ip.address 3.03Mb 2.78Mb 1.82Mb <= 26.6Kb 27.4Kb 20.2Kb
        server.example.com => ip.address 2.26Mb 2.66Mb 1.36Mb <= 51.7Kb 49.1Kb 24.4Kb
        server.example.com => ip.address 586Kb 2.50Mb 1.03Mb <= 4.17Kb 26.1Kb 10.7Kb
        server.example.com => ip.address 2.42Mb 2.49Mb 2.44Mb <= 31.6Kb 29.7Kb 29.9Kb
        server.example.com => ip.address 2.41Mb 2.46Mb 2.41Mb <= 26.4Kb 24.5Kb 23.8Kb
        server.example.com => ip.address 2.37Mb 2.39Mb 2.40Mb <= 28.9Kb 27.0Kb 28.5Kb
        server.example.com => ip.address 525Kb 2.20Mb 1.05Mb <= 7.03Kb 26.0Kb 12.8Kb
        TX: cum: 102GB peak: 1.65Gb rates: 1.46Gb 1.44Gb 1.48Gb
        RX: 1.31GB 24.3Mb 19.5Mb 18.9Mb 20.0Mb
        TOTAL: 103GB 1.67Gb 1.48Gb 1.46Gb 1.50Gb
    I checked the transfer speed using the iperf utility, from Server A to Server B:
        # iperf -c 0.0.0.2 -p 8777
        ------------------------------------------------------------
        Client connecting to 0.0.0.2, TCP port 8777
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [ 3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777
        [ ID] Interval Transfer Bandwidth
        [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec
    My current connections on Server B:
        # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c
        2072 TIME_WAIT
        28 SYN_RECV
        1 LISTEN
        189 LAST_ACK
        139 FIN_WAIT2
        373 FIN_WAIT1
        3381 ESTABLISHED
        34 CLOSING
    Server A network card information:
        Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7) drv probe link
        Link detected: yes
    Server B network card information:
        Settings for eth2:
        Supported ports: [ FIBRE ]
        Supported link modes: 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7) drv probe link
        Link detected: yes
    ifconfig on Server A:
        eth0 Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA
        inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0
        TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB)
        lo Link encap:Local Loopback
        inet addr:127.0.0.1 Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING MTU:65536 Metric:1
        RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0
        TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB)
    ifconfig on Server B:
        eth2 Link encap:Ethernet HWaddr 00:25:90:82:C4:FE
        inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
        RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0
        TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB)
        lo Link encap:Local Loopback
        inet addr:127.0.0.1 Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING MTU:65536 Metric:1
        RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0
        TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB)
    netstat -i on Server B:
        # netstat -i
        Kernel Interface table
        Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
        eth2 9000 0 42098629968 0 2131223717 0 73698797854 0 0 0 BMRU
        lo 65536 0 1077908 0 0 0 1077908 0 0 0 LRU
    I turned up the send/receive buffers on the network card to 2048 and the problem still persists. I increased the MTU on Server A and the problem still persists. I also increased the MTU on Server B for better connectivity and transfer speed, but then it couldn't transfer at all. The problem is: as you can see from the iperf utility, the transfer speed from Server A to Server B is slow. When I restart the network service on Server B, the transfer from Server A runs at full speed; after 2 minutes it gets slow again. How can I troubleshoot this slow speed issue and fix it on Server B? Notice: if there are any other commands I should execute on the servers for more information that might help resolve the problem, let me know in the comments.


  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:
    - Database server - physical box, 12 GB RAM, 2 x quad-core CPUs (2.13 GHz Xeon E5506)
    - 2 web / app servers - cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth
    I am paying around $900 / month for this. My web / app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:
    - Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web / app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units
    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance of the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...


  • Criteria for selecting software for embedded device

    - by Suresh Kumar
    We are currently evaluating web servers for an embedded device. We have laid down the evaluation criteria for things like HTTP version, security, compression etc. On the embeddable side, we have identified the following criteria: memory footprint; memory management (support for plugging in a custom memory manager); CPU usage; thread usage (support for a thread pool); and portability. What I want input on is: Are there any other criteria that embeddable software should meet? What exactly does it mean when someone says that a piece of software is designed for embeddable use? We have currently zeroed in on two web servers: AppWeb and Lighttpd (lighty). Feature-wise, the two seem to be on par. However, it is claimed that AppWeb is designed for embedded use while Lighttpd is not. To choose between these two web servers, what criteria should I be looking at?


  • Create CRM Organizations on a load-balanced network

    - by user82613
    I'm trying to understand how to create a CRM Organization on a load-balanced network. I have three web servers (Web01, Web02, Web03), three application servers (App01, App02, App03) and a SQL Server (SQL01). I already have the load balancer set up, and there is already one organization set up by someone on all the web servers. This organization is Internet-facing. Now I want to create one more organization on the same set of web servers. Can anyone please help me understand how to set up the new organization on the load balancer in this scenario?

