Search Results

Search found 30046 results on 1202 pages for 'document load'.

Page 19/1202 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Cannot select/edit TinyMCE-generated table

    - by Rakward
    I'm currently using the TinyMCE editor on my Drupal website. The problem is that part of the table sticks out beneath the editor. If I remove the height set by JavaScript with Firebug, it looks fine, even after resizing. So I want to remove the height with JS, and I've put this call at the end of my page:

        $('table#edit-body_tbl').removeAttr('style');

    However, nothing happens. If I test the call in Firebug's console, it works perfectly. Basically, the JS works, but it won't do anything if I simply load it at the end of the page, even inside document.ready. The TinyMCE script is loaded before my script, so I should be able to select/edit/delete elements generated by it, no? Does anybody know why, or how I can force my function to really run last (currently it is right in front of the -tag)? Other functions in the script work, except this one.
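
    A minimal sketch of one workaround, assuming the table id from the question (#edit-body_tbl) and that the element simply does not exist yet when document.ready fires: poll until TinyMCE has created it, then strip the inline style.

        $(document).ready(function () {
            // The editor builds its table after document.ready, so wait for it.
            var timer = setInterval(function () {
                var $tbl = $('table#edit-body_tbl');
                if ($tbl.length) {
                    $tbl.removeAttr('style'); // drop the height TinyMCE set inline
                    clearInterval(timer);
                }
            }, 200);
        });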

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how Tuxedo performs load balancing. This is often asked by customers who see an imbalance in the number of requests handled by servers offering a specific service. First of all, let me say that Tuxedo really does load or request optimization instead of load balancing. What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time. Simple round-robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests, but the question I ask is, "to what benefit?"

    Instead, Tuxedo scans the queues (which may or may not correspond to servers, depending on SSSQ - Single Server Single Queue - or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed. The scan is always performed in the same order, and if during the scan an empty queue is found, the request is immediately placed on that queue and request routing is done. However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it, where work is the sum of all the requests queued, weighted by their "load" value as defined in the UBBCONFIG file.

    What this means is that under light loads, only the first few queues (servers) process all the requests, as an empty queue is often found before reaching the end of the scan. Thus the first few servers in the scan order handle most of the requests. While this sounds non-optimal, in fact it capitalizes on the underlying operating system and hardware behavior to produce the best possible performance. Round-robin scheduling would spread the requests across all the available servers, and thus require all of them to be in memory, and they would likely not share much in the way of hardware or memory caches. Tuxedo's approach maximizes the various caches and thus optimizes overall performance.

    Hopefully this makes sense and explains why you may see a few servers handling most of the requests. Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed. In the next post I'll try to cover how this applies to servers in a clustered (MP) environment, because the load balancing there is a little more complicated.

    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
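
    For reference, a purely illustrative sketch (service names and values invented here) of where those "load" weights come from: they are assigned per service in the *SERVICES section of the UBBCONFIG file, with load-aware queue selection enabled by LDBAL in *RESOURCES.

        *RESOURCES
        LDBAL       Y           # enable load-based queue selection

        *SERVICES
        TRANSFER    LOAD=80     # a heavyweight request
        BALANCE     LOAD=20     # a lightweight request; the default LOAD is 50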

    Read the article

  • Load Test Manifesto

    - by jchang
    Load testing used to be a standard part of software development, but not anymore. Now people express a preference for assessing performance on the production system. There is a lack of confidence that a load test reflects what will actually happen in production. In essence, it has become accepted that the value of load testing is not worth the cost and time, and perhaps that there is no value at all. The main problem is the load test plan criteria – excessive focus on perceived importance...(read more)

    Read the article

  • IIS load balancing and site deployment

    - by KLC
    Hi, currently I have a site that sits on one IIS7 server. When we deploy a new version of the site, we bring the site down and display an offline page. What I really want is to have two exact copies of the site on one IIS 7 server and load balance users between the two copies. When we deploy a new version of the site, we will bring site1 down (users on site1 automatically route to site2 on the next postback); when the site1 deployment is complete, we bring site2 down (users on site2 are routed to site1 on the next postback). Is this even possible?

    Read the article

  • How to create a simulator for a web application for load and stress testing

    - by girish
    I'm developing a web application, and now I need to create a simulator for it that will be able to re-run a process that has already been carried out on the website. Let's say I'm developing an auction site where users bid on a product; during this process a number of users bid on the same product, and at the end one user buys it. What I want is to record this process, or anything like it, so that I can run the same process again and test the load and stress on the web application and the database server. Thank you.

    Read the article

  • Use AJAX to load webpage content into DIV

    - by arik-so
    Hello. I have a text field. When I type something into that text field, I want to load content into the DIV below it. Here is roughly what I mean:

        <input type="text" onkeyup="/* not sure how it's done, but something like: */
            document.getElementById('search_results').innerHTML = getContents('search.php?query=' + this.value);" /><br/>
        <div id="search_results"></div>

    Hope you can help. Thanks in advance! EDIT: I would appreciate it if the solution did not involve using jQuery, as long as that's possible.
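
    A minimal sketch of one jQuery-free approach, assuming the search.php endpoint and the element ids from the question: issue an XMLHttpRequest on each keyup and write the response into the div.

        function loadResults(query) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'search.php?query=' + encodeURIComponent(query), true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    document.getElementById('search_results').innerHTML = xhr.responseText;
                }
            };
            xhr.send();
        }
        // <input type="text" onkeyup="loadResults(this.value);" />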

    Read the article

  • Problems loading/unloading a C DLL from C#

    - by Goran
    I'm having problems with an external native DLL. I'm working on an ASP.NET 1.1 web application and load this DLL through DllImport directives. This is how I map the DLL functions:

        [DllImport("somedllname", CallingConvention=CallingConvention.StdCall)]
        public static extern int function1(string lpFileName, string lpOwnerPw, string lpUserPw);
        [DllImport("somedllname", CallingConvention=CallingConvention.StdCall)]
        public static extern int function2(int nHandle);

    I call the DLL methods and all works great, but in some cases this DLL crashes my web site, so I would like the option to unload the DLL after I use it. I found a solution at the link below, but the 'UnmanagedFunctionPointer' attribute is not available in .NET 1.1: http://blogs.msdn.com/jonathanswift/archive/2006/10/03/Dynamically-calling-an-unmanaged-dll-from-.NET-_2800_C_23002900_.aspx Is there a way I can achieve what this guy did with his example?
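
    A minimal sketch of one possible workaround, not the approach from the linked post: keep the existing DllImport declarations, but force the native module out of the process with kernel32's GetModuleHandle/FreeLibrary once you are done with it. This avoids the delegate-marshalling APIs that .NET 1.1 lacks, but freeing a module that P/Invoke loaded is risky if any call into it is still in flight.

        using System;
        using System.Runtime.InteropServices;

        public class NativeLib
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
            private static extern IntPtr GetModuleHandle(string lpModuleName);

            [DllImport("kernel32.dll")]
            private static extern bool FreeLibrary(IntPtr hModule);

            // Force "somedllname" out of the process after use.
            public static void Unload()
            {
                IntPtr handle = GetModuleHandle("somedllname");
                if (handle != IntPtr.Zero)
                {
                    FreeLibrary(handle);
                }
            }
        }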

    Read the article

  • Load balancing, connection loop

    - by Iapilgrim
    Hi all, I've set up load balancing with Apache and Tomcat through the AJP connector. Here is the config in Apache's httpd.conf:

        BalancerMember ajp://10.0.10.13:8009 route=osmoz-tomcat-1 retry=5
        BalancerMember ajp://10.0.10.14:8009 route=osmoz-tomcat-2 retry=5
        <Location />
            Allow From All
            ProxyPass balancer://tomcatservers/ stickysession=JSESSIONID nofailover=off
            ProxyPassReverse /
        </Location>

    When I try to access https://mycms.net/apps/bo/signin I see the connection loop forever (it redirects forever). I don't know why, and this only happens sometimes. So my questions are: Is there any way Apache could cause a redirect loop? Is there a tool that would help me monitor the connections? My question may not be clear, but I don't have any clue right now. Thanks
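
    For reference, here is a minimal sketch of the usual shape of such a config (host names and routes taken from the question, everything else illustrative). The BalancerMember lines normally live inside a <Proxy balancer://...> block that the ProxyPass then refers to:

        <Proxy balancer://tomcatservers>
            BalancerMember ajp://10.0.10.13:8009 route=osmoz-tomcat-1 retry=5
            BalancerMember ajp://10.0.10.14:8009 route=osmoz-tomcat-2 retry=5
        </Proxy>

        ProxyPass        / balancer://tomcatservers/ stickysession=JSESSIONID nofailover=Off
        ProxyPassReverse / balancer://tomcatservers/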

    Read the article

  • Load testing a quicktime streaming server from ubuntu machine

    - by ebeland
    I have software that can launch and control multiple Firefox browsers on Ubuntu EC2 images. I need to run a small load test against a QuickTime Streaming Server. The stream starts automatically when loaded in a browser that has the QuickTime plugin, so I don't need to automate the stream once it starts. Alternatively, I can also make these machines run arbitrary Ruby code or executables. How can I get these Ubuntu machines to pull in the stream? Also, how can I capture bandwidth usage (maybe with a shell script?) on the worker machines?
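
    A minimal sketch of one way to capture bandwidth usage from a shell script, assuming the interface is eth0: sample the interface's received-bytes counter once a second and log the delta.

        #!/bin/sh
        # Log approximate inbound bandwidth (bytes/sec) on eth0.
        IFACE=eth0
        prev=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
        while true; do
            sleep 1
            cur=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
            echo "$(date +%T) $((cur - prev)) bytes/sec" >> bandwidth.log
            prev=$cur
        done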

    Read the article

  • MySQL LOAD DATA LOCAL INFILE example in python?

    - by David Perron
    I am looking for a syntax definition, example, sample code, wiki, etc. for executing a LOAD DATA LOCAL INFILE command from Python. I believe I can use mysqlimport as well if that is available, so any feedback (and code snippet) on which is the better route is welcome. A Google search is not turning up much in the way of current info. The goal in either case is the same: automate loading hundreds of files with a known naming convention and date structure into a single MySQL table. David
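
    A minimal sketch of one way to do it from Python, assuming the MySQLdb driver and an invented table/column layout; the key detail is enabling local_infile on the connection (the server must also allow LOCAL INFILE).

        import glob
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                               db="mydb", local_infile=1)
        cur = conn.cursor()

        # Load every file matching the known naming convention into one table.
        for path in sorted(glob.glob("/data/exports/report_*.csv")):
            cur.execute(
                "LOAD DATA LOCAL INFILE %s INTO TABLE reports "
                "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
                "IGNORE 1 LINES",
                (path,)
            )
        conn.commit()
        conn.close()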

    Read the article

  • jquery load method

    - by cognos79
    I am using an MSCaptcha image and trying to reload only the captcha image on a button click. It seems like the captcha image is sometimes loaded twice.

        <script type="text/javascript">
            $(document).ready(function() {
                $('#btnSubmit').click(function() {
                    $('#dvCaptcha').load('default.aspx #dvCaptcha');
                });
            });
        </script>
        <div id="btnDiv">
            <asp:Button ID="btnSubmit" runat="server" Text="Submit" />
        </div>
        <div id="dvCaptcha">
            <cc1:CaptchaControl ID="ccJoin" runat="server" CaptchaBackgroundNoise="none" CaptchaLength="5"
                CaptchaHeight="60" CaptchaWidth="200" CaptchaLineNoise="None" CaptchaMinTimeout="5"
                CaptchaMaxTimeout="240" />
        </div>

    Any ideas?
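
    A minimal sketch of two things worth trying, offered as assumptions rather than a confirmed diagnosis: load only the children of #dvCaptcha so the fetched div is not nested inside the existing one, and return false from the click handler so the asp:Button does not also trigger a full postback that renders the captcha a second time.

        $(document).ready(function () {
            $('#btnSubmit').click(function () {
                // '#dvCaptcha > *' grabs the fragment's contents, not the wrapper div itself.
                $('#dvCaptcha').load('default.aspx #dvCaptcha > *');
                return false; // suppress the server-side postback for this click
            });
        });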

    Read the article

  • How does load balancing work with multiple servers and multiple DBs

    - by Matt
    I guess what I'm looking for is a description of how this all works together. I'm used to setting up one server, with maybe another server to handle the DB. My question is, how does the load balancer work? Where do all the script (PHP, Python) files go? If I make a change to one, do I have to rsync them to all the servers that the balancer refers to? Also, does each server need client-side DB software installed so it can reference the DBs that are on other servers? If there is a site that explains all this I would be happy to read it.

    Read the article

  • jquery load links not clickable

    - by john morris
    OK, I am loading a separate page with links in it into a page named index.php. It loads fine, but when I click one of the links inside the loaded content, nothing happens; they don't act as links. But if I do an alert('hi'); after the load('page.html'); then it works. Any ideas on getting this to work without alerting something after it loads? Also, I can't use a callback unless there is a way to update the GET variable, because the page being loaded has a $_GET variable, and the links inside the loaded page are supposed to update that $_GET variable. Anyway, is there a way to make the links clickable after loading the page?
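
    One common cause is that click handlers were bound before the new content existed. A minimal sketch using delegated events (the .on() form assumes jQuery 1.7+; older versions have .delegate() or .live()); the #content container id is invented here, while page.html is from the question:

        // Bind on a stable ancestor so links added later by .load() still trigger the handler.
        $(document).on('click', '#content a', function (e) {
            e.preventDefault();
            var href = $(this).attr('href');
            // Re-load the container with the link's target, carrying its query string along.
            $('#content').load(href);
        });

        $('#content').load('page.html');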

    Read the article

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.

    Issue: To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must automatically fail over to currently running instances if an instance should go down. Auto-discovery of newly running instances is desirable but not required.

    Research: I have reviewed the official PostgreSQL documentation on this issue. However, that focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, PG Pool II.

    Question: What are your real-world experiences with PG Pool II? What other simple and reliable alternatives are available?
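
    A purely illustrative sketch of what the relevant part of a pgpool-II configuration (pgpool.conf) can look like for load-balanced, read-only access to two backends; the host names are invented and the full file has many more settings:

        # Two PostgreSQL backends behind pgpool-II
        backend_hostname0 = 'pg-node-1'
        backend_port0     = 5432
        backend_weight0   = 1

        backend_hostname1 = 'pg-node-2'
        backend_port1     = 5432
        backend_weight1   = 1

        # Spread SELECTs across the backends and probe them regularly
        load_balance_mode = on
        health_check_period = 10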

    Read the article

  • What is the value of x for load and store?

    - by Kevinniceguy
    This is a bit of a challenge. On a single-processor system, in which load and store are assumed to be atomic, what are all the possible values for x after both threads have completed the following execution, assuming that x is initialised to 0? Hint: you need to consider how this code might be compiled into machine language.

        // Thread 1:
        for (int i = 0; i < 5; i++)
            x = x + 1;

        // Thread 2:
        for (int j = 0; j < 5; j++)
            x = x + 1;
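
    To see why the hint matters, here is a sketch of how each x = x + 1 typically compiles: into a separate load, add, and store, so the two threads' instructions can interleave between the load and the store.

        ; one iteration of x = x + 1, in pseudo-assembly
        LOAD  r1, [x]      ; read the shared variable into a register
        ADD   r1, r1, 1    ; increment the register copy
        STORE [x], r1      ; write it back

        ; If Thread 2 runs its own LOAD/ADD/STORE between Thread 1's LOAD and STORE,
        ; Thread 1's STORE overwrites Thread 2's update and increments are lost,
        ; so the final value of x need not be 10; it can be as low as 2.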

    Read the article

  • Fetch and load a whole page with AJAX

    - by nimo
    I realize that it's generally not a good idea to do this, but the reason I want to is that it's a really heavy page, and I want to show the user the progress of the download and, when it's done, load the page. I could do this with some kind of spinner, but is it possible to show the actual progress? Can I see how much, and what, data has been downloaded? Let's say I use jQuery for the AJAX request; how do I do this? If you have other suggestions, please feel free to suggest them.
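
    A minimal sketch of one way to do it with $.ajax, assuming the server sends a Content-Length header (without it, event.total is unknown and you are back to an indeterminate spinner); the URL and the #progress/#target elements are invented here. The xhr option lets you attach a native progress listener:

        $.ajax({
            url: '/heavy-page.html',
            xhr: function () {
                var xhr = new window.XMLHttpRequest();
                xhr.addEventListener('progress', function (event) {
                    if (event.lengthComputable) {
                        var percent = Math.round(100 * event.loaded / event.total);
                        $('#progress').text(percent + '% downloaded');
                    }
                });
                return xhr;
            },
            success: function (html) {
                $('#target').html(html); // swap the fetched page in once it has fully arrived
            }
        });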

    Read the article

  • load data only when slide down with jquery

    - by hd
    I have a link that, when the user clicks on it, loads some data into a div. I want to display a waiting message to the user until the data is completely fetched from the server, and then show it. I also want the data URL to be requested only when the result box is sliding down, not on both the slide-down and slide-up events, to reduce the load on the server. My code is below, but I don't know how to implement the second part:

        $(document).ready(function() {
            $("#prereq").hide();
            $("#prereqlink").click(function() {
                $("#prereq").html("please wait ...");
                $("#prereq").slideToggle("slow");
                $.ajax({
                    url: "referee.php",
                    success: function(data) {
                        $("#prereq").html(data);
                    }
                });
            });
        });

    Would you help me?
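
    A minimal sketch of one way to do it, keeping the ids and URL from the question: only fire the AJAX request when the box is currently hidden (i.e. this click will slide it down), and leave the slide-up click as a pure toggle.

        $(document).ready(function () {
            $("#prereq").hide();
            $("#prereqlink").click(function () {
                var $box = $("#prereq");
                if ($box.is(":hidden")) {
                    // This click opens the box, so fetch the data.
                    $box.html("please wait ...");
                    $.ajax({
                        url: "referee.php",
                        success: function (data) {
                            $box.html(data);
                        }
                    });
                }
                $box.slideToggle("slow");
            });
        });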

    Read the article

  • jquery .load() function only gets called once

    - by user1288099
    The HTML:

        <div class="stackwrapper" id="user1"></div>
        <div class="stackwrapper" id="user2"></div>
        <div class="userdrawings"></div>

    The JavaScript:

        $('.stackwrapper').click(function(e) {
            var id = $(this).attr('id');
            $('.userdrawings').load('loadSession.php?user=' + id).fadeIn("slow");
        });

    Somehow it only works once, on the first click on a stackwrapper; when I click on the second one, the function is not triggered again.

    Read the article

  • .NET load balancing for a server

    - by user1439111
    Some time ago I wrote server software which is currently running at its max (3k users on average). So I decided to rewrite certain parts so I can run the software on another server to balance the load. I can't simply start another instance of the server, since there is some data which has to be available to all users. So I was thinking of creating a small manager: all the servers connect and send their (relevant) data to the manager. But that also got me thinking about another problem: the manager could also reach its limits, which is exactly what I'm trying to prevent in the future. So I would like to know how I could fix this problem. (I have already tried to optimize critical parts of the software, but I can't optimize it forever.)

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and requires the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.

    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…

    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

    This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required.

    This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues will optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

    There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example:

        public string[] SearchedFields {
            get {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

    You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind.

    Here is what a product record looks like in Nwazet.Commerce for example:

    And here is the same data in the infoset:

    The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage="" AllowBackOrder="false"
                       Weight="0.2" Size="" ShippingCost="null" IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record.

    In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably.

    So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources.

    Expect this code to make its way into the 1.8 version of Orchard when that’s available.

    Read the article

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that read from a single MSMQ queue. The problem is that I have found paragraphs here and there mentioning that high throughput can be done, but I cannot seem to find a document on how to implement it. The following is an excerpt from an MSDN article, Best Practices for Queued Communication (http://msdn.microsoft.com/en-us/library/ms731093.aspx):

        "To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue."

    This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load balance the WCF services on multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ queue when an item is available in the queue. If this is possible, an issue I have is that I do not understand the activation process WCF uses to retrieve messages from a remote queue. Does anyone know of any articles or webcasts that explain it in detail? BarDev
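
    As an illustrative sketch only (queue, contract, and machine names invented here), each server in the farm would expose the same contract on the same net.msmq endpoint pointing at the shared remote queue, roughly like this in each service's config:

        <system.serviceModel>
          <services>
            <service name="Orders.OrderService">
              <endpoint address="net.msmq://queue-host/private/orders"
                        binding="netMsmqBinding"
                        bindingConfiguration="TransactionalQueue"
                        contract="Orders.IOrderService" />
            </service>
          </services>
          <bindings>
            <netMsmqBinding>
              <!-- exactlyOnce requires a transactional queue and gives transactional reads -->
              <binding name="TransactionalQueue" exactlyOnce="true" />
            </netMsmqBinding>
          </bindings>
        </system.serviceModel>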

    Read the article

  • Auto load a specific link at various time intervals

    - by user228837
    Here's what I need to do. I'm using Google Chrome. I have a page that auto-reloads every 5 seconds using a script:

        javascript: timeout=prompt("Set timeout [s]");
        current=location.href;
        if(timeout>0)
            setTimeout('reload()',1000*timeout);
        else
            location.replace(current);
        function reload() {
            setTimeout('reload()',1000*timeout);
            fr4me='<frameset cols=\'*\'>\n<frame src=\''+current+'\'/>';
            fr4me+='</frameset>';
            with(document){write(fr4me);void(close())};
        }

    I found that script by Googling. The reason the page auto-reloads every 5 seconds is that I'm waiting for a specific link or URL to appear in the page. It appears at random times. Once I see the link I'm waiting for, I immediately click it. That's fine, but I want more. What I want is for the page to auto-reload and auto-detect the link I'm waiting for. Once the script finds that link, it should automatically open it in a new tab or page. For example, I'm auto-reloading www.example.com and waiting for a specific "BUY NOW" link. When the page auto-reloads, it checks whether there is a "BUY NOW" link, and if it finds one it should automatically open that link. Thanks.
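
    A minimal sketch of one way to do the detection step, assuming the target can be recognized by its link text ("BUY NOW" here): scan the links after each load, open the match in a new tab, and otherwise reload after 5 seconds.

        function findAndOpenLink() {
            var links = document.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                // Match the link by its visible text; adjust if the marker is in the href instead.
                if ((links[i].textContent || links[i].innerText).indexOf('BUY NOW') !== -1) {
                    window.open(links[i].href, '_blank'); // popup blockers may need to allow this site
                    return true;
                }
            }
            return false;
        }

        // Run once the page has loaded; reload again in 5 seconds if nothing was found.
        window.addEventListener('load', function () {
            if (!findAndOpenLink()) {
                setTimeout(function () { location.reload(); }, 5000);
            }
        });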

    Read the article

  • Microsoft Word Document Recovery seems unresolvable

    - by LarsTech
    We work on a server (Windows Server 2003 Enterprise) that a bunch of us developers remote into using Remote Desktop Connection, and now every "other" time I open Word I get this Document Recovery pane: As you can see, "Delete" and "Show Repairs" are disabled (I don't think this is my document). "Open" or "Save As" works, but it never marks this document as recovered; it just keeps coming up every "other" time I open Word. I don't even care about this document. How can I remove this document from the list so that it doesn't come up anymore? Update: As per Moab's comment, I had the user delete the file. But Document Recovery still shows up with that item every other time Word is opened. The only difference now is that clicking on "Open" produces this (obvious) message: How can I clear this list? BTW, the user that owned this document was never getting the Document Recovery pane.

    Read the article
