Search Results

Search found 22717 results on 909 pages for 'load operation'.


  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: the Sort operator and the Hash Match operator.

    The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a hashing function on each row as it comes in, and then looks to see if it’s created a hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one’s a bit different. This post is going to look at how aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match.

    Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a hash function to some data and seeing if the set of values has been seen before, but this time it needs more information than the mere existence of a new set of values; it needs to consider how many of them there are.

    A lot depends here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan – a property called Ordered will demonstrate this.

    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as blocking. The definitive article on performance tuning SSIS uses Sort and Aggregate as examples of blocking transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS performance tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.) A query that will produce a million rows in order was in order. Let me rephrase: I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a unique index on it, it would be able to skip the Aggregate function completely and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all.

    This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs; it’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation; the physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic aggregation that will have to work with whatever data is passed in.

    I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A custom component could be created which could detect whether the SortKeys columns of the input matched the grouping columns of the aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.

    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking vs non-blocking and synchronous vs asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
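
    To make the stream-versus-hash distinction concrete outside of SQL Server, here is a minimal sketch of the two strategies – written in TypeScript purely for illustration; this is the shape of the two algorithms, not SQL Server's implementation. The streaming version can emit each group's count as soon as the key changes, but only because its input is sorted; the hash version cannot emit anything until it has seen every row:

        function* streamCount(sorted: Iterable<string>): Generator<[string, number]> {
            let current: string | undefined;
            let count = 0;
            for (const key of sorted) {
                if (key !== current) {
                    if (current !== undefined) yield [current, count]; // emit as soon as a group closes
                    current = key;
                    count = 0;
                }
                count++;
            }
            if (current !== undefined) yield [current, count]; // final group
        }

        function hashCount(unsorted: Iterable<string>): Map<string, number> {
            const counts = new Map<string, number>();
            for (const key of unsorted) {
                counts.set(key, (counts.get(key) ?? 0) + 1); // nothing can be emitted yet
            }
            return counts; // results exist only after every row has been seen
        }

    With sorted input, the first result from streamCount is available as soon as the first group completes – the Bridge-hand behaviour described above – while hashCount blocks until the end regardless of input order.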

    Read the article

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.
        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]
            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842 What does the error generally refer to, that would cause it to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines and it turns out the .NET framework versions are very slightly different. Local machine is 4.0.30319.488 and build server is 4.0.30319.1. I'm about to run Windows Update on the server to allow it to install some updates, as several seem to be .NET framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. I just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing visual studio team viewer seems to have fixed that, so I've just run the proper async CTP3 installer to see if that makes a difference.)

    Read the article

  • Is the load order for external javascript files different in IE8 compared to IE7?

    - by Benny
    Hello, I ask because I'm running an application in which I load an external script file in the HEAD section of the page, and then attempt to call a function from it in the onLoad attribute of the BODY tag.

    external.js:

        function someFunction() {
            alert("Some message");
        }

    myPage.html:

        <html>
        <head>
            <script type="text/javascript" language="javascript" src="external.js"></script>
        </head>
        <body onLoad="someFunction();">
        </body>
        </html>

    Using the developer tools in IE8, I get an exception thrown at the onLoad statement because, apparently, the external JavaScript file hasn't been loaded yet. I haven't had this problem come up in IE7 before, thus my question. Did they change the load order between IE7 and IE8? If so, is there a better way to do this? (The real function references many other functions and constants, which look much better in an external file.) Thanks, B.J.
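
    One common way to sidestep script/handler ordering problems – shown here as an illustrative sketch, not a confirmed explanation of IE8's behaviour – is to register the handler from script once the document's load event fires, rather than relying on the body's onLoad attribute:

        declare function someFunction(): void; // defined in external.js

        // in a script block included after external.js (or at the end of external.js itself)
        window.onload = function () {
            // the load event fires only after external scripts have executed,
            // so someFunction is guaranteed to exist here
            someFunction();
        };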

    Read the article

  • NHibernate Pitfalls: Loading Foreign Key Properties

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. When saving a new entity that has references to other entities (one to one, many to one), one has two options for setting their values:

        Load each of these references by calling ISession.Get and passing the foreign key;
        Load a proxy instead, by calling ISession.Load with the foreign key.

    So, what is the difference? Well, ISession.Get goes to the database and tries to retrieve the record with the given key, returning null if no record is found. ISession.Load, on the other hand, just returns a proxy to that record, without going to the database. This turns out to be a better option, because we really don’t need to retrieve the record – and all of its non-lazy properties and collections – we just need its key. An example:

        //going to the database
        OrderDetail od = new OrderDetail();
        od.Product = session.Get<Product>(1); //a product is retrieved from the database
        od.Order = session.Get<Order>(2); //an order is retrieved from the database

        session.Save(od);

        //creating in-memory proxies
        OrderDetail od = new OrderDetail();
        od.Product = session.Load<Product>(1); //a proxy to a product is created
        od.Order = session.Load<Order>(2); //a proxy to an order is created

        session.Save(od);

    So, if you just need to set a foreign key, use ISession.Load instead of ISession.Get.
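
    The proxy idea itself is simple enough to sketch outside NHibernate. Here is a toy illustration in TypeScript – invented names, and not NHibernate's implementation – of the difference between fetching a record and handing back a stand-in that only knows its key:

        interface Product { id: number; name: string; }

        // get: hits the "database" immediately
        function get(db: Map<number, Product>, id: number): Product | null {
            return db.get(id) ?? null;
        }

        // load: returns a proxy that defers the hit until a
        // property other than the key is actually read
        function load(db: Map<number, Product>, id: number): Product {
            let real: Product | undefined;
            return new Proxy({ id } as Product, {
                get(target, prop) {
                    if (prop === "id") return id;   // key is known without a query
                    real ??= db.get(id);            // first non-key access triggers the fetch
                    return real ? real[prop as keyof Product] : undefined;
                },
            });
        }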

    Read the article

  • How to load static files from view HTML in web2py?

    - by MikeWyatt
    Given a view with layout, how can I load static files (CSS and JS, essentially) into the <head> from the view file?

    layout.html:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="{{=T.accepted_language or 'en'}}">
        <head>
            <title>{{=response.title or request.application}}</title>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <!-- include requires CSS files
            {{response.files.append(URL(request.application,'static','base.css'))}}
            {{response.files.append(URL(request.application,'static','ez-plug-min.css'))}}
            -->
            {{include 'web2py_ajax.html'}}
        </head>
        <body>
            {{include}}
        </body>
        </html>

    myview.html:

        {{extend 'layout.html'}}
        {{response.files.append(URL(r=request,c='static',f='myview.css'))}}
        <h1>Some header</h1>
        <div>
            some content
        </div>

    In the above example, the "myview.css" file is either ignored by web2py or stripped out by the browser. So what is the best way to load page-specific files like this CSS file? I'd rather not stuff all my static files into my layout.

    Read the article

  • Generic and type safe I/O model in any language

    - by Eduardo León
    I am looking for an I/O model, in any programming language, that is generic and type safe. By genericity, I mean there should not be separate functions for performing the same operations on different devices (read_file, read_socket, read_terminal). Instead, a single read operation works on all read-able devices, a single write operation works on all write-able devices, and so on. By type safety, I mean operations that do not make sense should not even be expressible in the first place. Using the read operation on a non-read-able device ought to cause a type error at compile time, and similarly for using the write operation on a non-write-able device, and so on. Is there any generic and type safe I/O model?
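
    For what it's worth, an interface (or type-class) encoding gets most of the way there in many languages. A rough sketch in TypeScript – just one possible encoding, with invented names:

        interface Readable { read(n: number): Uint8Array; }
        interface Writable { write(data: Uint8Array): void; }

        // one generic operation works on every read-able/write-able device
        function copy(src: Readable, dst: Writable): void {
            let chunk = src.read(4096);
            while (chunk.length > 0) {
                dst.write(chunk);
                chunk = src.read(4096);
            }
        }

        // a device advertises exactly the capabilities it has
        class ReadOnlyFile implements Readable {
            read(n: number): Uint8Array { /* ... */ return new Uint8Array(0); }
        }

        const f = new ReadOnlyFile();
        // copy(f, f); // type error at compile time: ReadOnlyFile is not Writable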

    Read the article

  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1. Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode. I built this ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right. But gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of ‘foo’ with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

    Read the article

  • How to load an HTML file into an included PHP file from another PHP?

    - by Peter NGM
    I have 2 PHP files and 1 HTML file. I want to include file2.php in file1.php.

    file1.php:

        <html>
        <head>...</head>
        <body>
        ...
        <?php include("file2.php"); ?>
        ...
        </body>
        </html>

    I want to load an HTML file in file2.php.

    file2.php:

        <?php
        $doc = new DOMDocument();
        $doc->loadHTMLFile("sample.html");
        echo $doc->saveHTML();
        ?>

    and sample.html is:

        ...
        <b>Hello World!</b>
        ...

    My problem is this error:

        Warning: DOMDocument::loadHTMLFile() [domdocument.loadhtmlfile]: I/O warning : failed to load external entity "sample.html" in C:\xampp\htdocs\project\file2.php on line 3

    Please help me to solve this problem.

    Read the article

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'Balls and Bins' problem and it seems that if a hash function is working right (i.e. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):

        Probability that a bin is empty, for large n, is 1/e.
        Expected number of empty bins is n/e.
        Probability that a bin has k collisions is <= 1/k!.
        Probability that a bin has at least k collisions is <= (e/k)**k.

    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log – usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or base 2, is this max-load formula right, and how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, "with high probability" seems to mean 1 - 1/n.
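
    As a starting point, the empirical side of the check is straightforward to script. A rough sketch (TypeScript, with a uniform random draw standing in for a well-behaved hash) that throws n values into n bins and compares the fullest bin against 3·ln(n)/ln(ln(n)) – note Math.log is the natural log, which matches the usual convention in these papers:

        function maxLoadTest(n: number): void {
            const bins = new Array<number>(n).fill(0);
            for (let i = 0; i < n; i++) {
                bins[Math.floor(Math.random() * n)]++; // stand-in for hashing one value
            }
            let maxLoad = 0;
            for (const b of bins) if (b > maxLoad) maxLoad = b;
            const bound = 3 * Math.log(n) / Math.log(Math.log(n)); // 3*ln(n)/ln(ln(n))
            console.log(`n=${n}  max load=${maxLoad}  bound=${bound.toFixed(2)}`);
        }

        // the bound is asymptotic, so try growing n
        [1_000, 100_000, 10_000_000].forEach(maxLoadTest);

    Repeating each n many times and checking how often the max load stays under the bound gives the "with high probability" part a concrete, testable reading.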

    Read the article

  • What is the best way to "carve" a terrain created from a heightmap?

    - by tigrou
    I have a 3D landscape created from a heightmap. I'd like to "carve" some holes in that terrain, which will allow me to create bridges, caverns and tunnels inside it. The operation will be done in the game editor, so it doesn't need to be realtime. In the end, rendering is done using traditional polygons. What would be the best/easiest way to do that? I have already thought about several solutions:

    Solution 1

        1) Create voxels from the heightmap (very easy). In other words, fill a 3D array like this: voxels[32][32][32] from the heightmap values.
        2) Carve holes in the voxels as I want (easy too).
        3) Convert voxels to polygons using some iso-surface extraction technique (like marching cubes).
        4) Reduce (decimate) the polygons created in 3).

    This technique seems to be the most promising for giving good results (untested). However, the problem with marching cubes is that it tends to produce lots of polygons, so reducing them is mandatory. Implementing 4) also seems non-trivial; I have read several papers on the web and it seems pretty complex. I was also unable to find an example, code snippet or something to start writing an algorithm for triangle mesh decimation. Maybe there is a special (simpler) decimation algorithm for meshes created from marching cubes?

    Solution 2

        1) Create a triangle mesh from the heightmap (easy).
        2) Apply several 3D boolean operations (e.g. subtraction with a sphere) to carve the mesh.
        3) Apply some procedure to reduce polygons (optional).

    Operation 2) seems to be very complex and, to be honest, I have no idea how to do it. Also, applying many boolean operations seems slow and might degrade the triangle mesh every time one is applied.
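
    Steps 1) and 2) of the first solution really are the easy part. A hedged sketch in TypeScript – grid size and the solid/empty convention are made up for illustration:

        const SIZE = 32;

        // voxels[x][y][z] = true means "solid terrain"
        function voxelize(heightmap: number[][]): boolean[][][] {
            const voxels: boolean[][][] = [];
            for (let x = 0; x < SIZE; x++) {
                voxels[x] = [];
                for (let y = 0; y < SIZE; y++) {
                    voxels[x][y] = [];
                    for (let z = 0; z < SIZE; z++) {
                        // solid wherever we are at or below the terrain height
                        voxels[x][y][z] = y <= heightmap[x][z];
                    }
                }
            }
            return voxels;
        }

        // carving is then just clearing voxels, e.g. a spherical tunnel section
        function carveSphere(v: boolean[][][], cx: number, cy: number, cz: number, r: number): void {
            for (let x = 0; x < SIZE; x++)
                for (let y = 0; y < SIZE; y++)
                    for (let z = 0; z < SIZE; z++)
                        if ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r)
                            v[x][y][z] = false;
        }

    The hard parts remain steps 3) and 4), as the question says – this sketch only shows why the voxel route makes the carving itself trivial.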

    Read the article

  • User authentication using CodeIgniter

    - by marcin_koss
    I have a problem creating the authentication part for my application. Below is the simplified version of my controllers. The idea is that MY_Controller checks if a session with user data exists. If it doesn't, it redirects to the index page where you have to log in.

    MY_controller.php:

        class MY_Controller extends Controller {

            function __construct() {
                parent::__construct();
                $this->load->helper('url');
                $this->load->library('session');

                if ($this->session->userdata('user') == FALSE) {
                    redirect('index');
                } else {
                    redirect('search');
                }
            }
        }

    order.php – main controller:

        class Orders extends MY_Controller {

            function __construct() {
                parent::__construct();
                $this->load->helper('url');
                $this->load->library('session');
            }

            function index() {
                // Here would be the code that validates information input by user.
                // If validation is successful, it creates user session.
                $this->load->view('header.html', $data); // load header
                $this->load->view('index_view', $data);  // load body
                $this->load->view('footer.html', $data); // load footer
            }

            function search() {
                // different page
            }
        }

    What is happening is that the browser is telling me that "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete." It seems like the redirect() is being looped. I looked at a few other examples of user auth and they were built using similar techniques.

    Read the article

  • Load XML file on separate thread, overwrite old, and save.

    - by Luis Tovar
    Hey everyone... so I'm trying to figure out how to do this. I have bounced around a lot of forums to try and find my answer, but no success: either the process is too complicated for me to understand, or it's just overkill. What I am trying to do is this. I have an XML file within my app. It's a 500KB XML file that I don't want the user to have to wait on when loading. SO... I put it in my app, which kills the load time and makes the app available offline. What I want to do is, when the app loads, in the background (separate thread), download the SAME XML file, which MIGHT be updated with new data. Once the download is complete, I want to REPLACE the XML file that was used to load the data. Any suggestions or code hints would be greatly appreciated. Thanks in advance!
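
    The general shape of this – bundled copy for instant load, background refresh, swap only after a complete download – looks roughly like the sketch below. TypeScript/Node stands in for whatever platform the app actually runs on, and the URL and paths are invented:

        import { promises as fs } from "fs";

        const BUNDLED = "bundled/data.xml";              // ships with the app
        const CACHED = "cache/data.xml";                 // latest downloaded copy
        const REMOTE = "https://example.com/data.xml";   // hypothetical endpoint

        // load whichever copy we have, without waiting on the network
        async function loadXml(): Promise<string> {
            try { return await fs.readFile(CACHED, "utf8"); }
            catch { return await fs.readFile(BUNDLED, "utf8"); }
        }

        // refresh in the background; write to a temp file first so the old
        // copy is only replaced once the download has fully succeeded
        async function refreshXml(): Promise<void> {
            const res = await fetch(REMOTE);
            if (!res.ok) return; // keep the old file on failure
            const body = await res.text();
            await fs.writeFile(CACHED + ".tmp", body, "utf8");
            await fs.rename(CACHED + ".tmp", CACHED); // atomic on most filesystems
        }

    The temp-file-then-rename step is the important bit: a half-finished download never clobbers the copy the app is currently reading.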

    Read the article

  • Is it possible to load an assembly targeting a different .NET runtime version in a new app domain?

    - by Notre
    Hello, I've an application that is based on the .NET 2 runtime. I want to add a little bit of support for .NET 4 but don't want to (in the short term) convert the whole application (which is very large) to target .NET 4. I tried the 'obvious' approach of creating an application .config file containing this:

        <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" />
        </startup>

    but I ran into some problems that I noted here. I got the idea of creating a separate app domain. To test it, I created a WinForm project targeting .NET 2. I then created a class library targeting .NET 4. In my WinForm project, I added the following code:

        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = "path to .NET 4 assembly";
        setup.ConfigurationFile = System.Environment.CurrentDirectory + "\\DotNet4AppDomain.exe.config";

        // Set up the Evidence
        Evidence baseEvidence = AppDomain.CurrentDomain.Evidence;
        Evidence evidence = new Evidence(baseEvidence);

        // Create the AppDomain
        AppDomain dotNet4AppDomain = AppDomain.CreateDomain("DotNet4AppDomain", evidence, setup);
        try
        {
            Assembly doNet4Assembly = dotNet4AppDomain.Load(
                new AssemblyName("MyDotNet4Assembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=66f0dac1b575e793"));
            MessageBox.Show(doNet4Assembly.FullName);
        }
        finally
        {
            AppDomain.Unload(dotNet4AppDomain);
        }

    My DotNet4AppDomain.exe.config file looks like this:

        <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0" />
        </startup>

    Unfortunately, this throws the BadImageFormatException when dotNet4AppDomain.Load is executed. Am I doing something wrong in my code, or is what I'm trying to do just not going to work? Thank you!

    Read the article

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error:

        File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107

    ... where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I request Loader.php to check if the file is either of these two cases, and if so simply return rather than trying to load it. Obviously, this is unacceptable longer term. Has anybody encountered this, and how did you solve it?

    Edited to add: We have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
            public function setup()
            {
                // for compatibility / remove and enable only the plugins you want
                $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
            }
        }

    And we have a propel.ini file in our top level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

    Read the article

  • jquery: how can I load the Google Maps API via ajax?

    - by Haroldo
    Before you reply: this is not as straightforward as you'd expect! I have a 'show on map' button which, when clicked, opens a dialog box/lightbox with the Google map in it. I don't want to load the Maps API on page load, just when a map has been requested. This is the PHP file the "show on map" button loads into the dialog box:

        <div id="map_canvas"></div>
        <script type="text/javascript">
        $(function() {
            //google maps stuff
            var latlng = new google.maps.LatLng(<?php echo $coords ?>);
            var options = {
                zoom: 14,
                center: latlng,
                mapTypeControl: false,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = new google.maps.Map(document.getElementById('map_canvas'), options);
            var marker = new google.maps.Marker({
                position: new google.maps.LatLng(<?php echo $coords ?>),
                map: map
            });
        })
        </script>

    I've been trying to load the API before ajaxing in the dialog like this:

        $('img.map').click(function(){
            var rel = $(this).attr('rel');
            $.getScript('http://maps.google.com/maps/api/js?sensor=false', function(){
                $.fn.colorbox({ href: rel })
            });
        })

    This doesn't seem to work :( I've also tried:

        adding <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script> to the ajax file
        running $.getScript('http://maps.google.com/maps/api/js?sensor=false'); on doc.ready

    The problem: the browser seems to be redirected to the api.js file – you see a white screen.
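
    For reference, the Maps JavaScript API accepts a callback query parameter for exactly this deferred-loading case. A rough sketch of injecting the script tag by hand (plain browser JavaScript/TypeScript; loadMapsApi and initMap are invented names):

        declare const google: any; // provided by the Maps API script once loaded

        let mapsLoaded = false;

        function loadMapsApi(onReady: () => void): void {
            if (mapsLoaded) { onReady(); return; }
            // the callback parameter asks the API to call initMap once it is fully initialised
            (window as any).initMap = () => { mapsLoaded = true; onReady(); };
            const s = document.createElement("script");
            s.src = "http://maps.google.com/maps/api/js?sensor=false&callback=initMap";
            document.head.appendChild(s);
        }

        // usage: only build the map once the API is really ready
        loadMapsApi(() => {
            new google.maps.Map(document.getElementById("map_canvas"), {
                zoom: 14,
                center: new google.maps.LatLng(0, 0)
            });
        });

    Waiting for the API's own callback, rather than for the script request to merely complete, is what avoids racing the map construction against the API's internal loader.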

    Read the article

  • Synchronizing local and remote cache in distributed caching

    - by ltfishie
    With a distributed cache, a subset of the cache is kept locally while the rest is held remotely. In a get operation, if the entry is not available locally, the remote cache will be used, and the entry is added to the local cache. In a put operation, both the local cache and remote cache are updated. Other nodes in the cluster also need to be notified to invalidate their local caches as well. What's the simplest way to achieve this if you implemented it yourself, assuming that nodes are not aware of each other?

    Edit: My current implementation goes like this:

        Each cache entry contains a time stamp.
        A put operation will update the local cache and the remote cache.
        A get operation will try the local cache, then the remote cache.
        A background thread on each node will check the remote cache periodically for each entry in the local cache. If the timestamp on the remote is newer, overwrite the local. If the entry is not found in the remote, delete it from the local cache.
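
    That reconciliation loop is simple enough to sketch. A hedged TypeScript version (the types are invented, and the remote side is modelled as a plain Map where a real system would have a network client):

        interface Entry<V> { value: V; timestamp: number; }

        class SyncingCache<V> {
            private local = new Map<string, Entry<V>>();
            constructor(private remote: Map<string, Entry<V>>) {}

            put(key: string, value: V): void {
                const e = { value, timestamp: Date.now() };
                this.local.set(key, e);
                this.remote.set(key, e); // both sides updated on put
            }

            get(key: string): V | undefined {
                let e = this.local.get(key);
                if (!e) {
                    e = this.remote.get(key);      // fall back to remote
                    if (e) this.local.set(key, e); // and pull it into local
                }
                return e?.value;
            }

            // background pass: reconcile local entries against the remote
            reconcile(): void {
                for (const [key, e] of this.local) {
                    const r = this.remote.get(key);
                    if (!r) this.local.delete(key);                             // gone remotely
                    else if (r.timestamp > e.timestamp) this.local.set(key, r); // remote is newer
                }
            }
        }

    On a real cluster, reconcile() would run off a timer (setInterval or a scheduled executor) – that is the "background thread" from the question – so nodes converge without ever knowing about each other.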

    Read the article

  • C# / XNA - Loading objects into memory - how does it work?

    - by carl
    Hello, I'm starting with C# and XNA. In the "Update" method of the "Game" class I have this code:

        t = Texture2D.FromFile( [...] ); //t is a 'Texture2D t;' which loads a small image

    The "Update" method works like a loop, so this code is called many times per second. Now, when I run my game, it takes 95MB of RAM and goes slowly up to about 130MB (due to the code I've posted; without this code it remains at 95MB), then drops immediately to about 100MB (garbage collection?), then again climbs slowly to 130MB, then immediately drops to 100MB, and so on. So my first question: can you explain why (and how) it works like that? I've found that if I change the code to:

        t.Dispose();
        t = Texture2D.FromFile( [...] );

    it works like this: first it takes 95MB and then goes slowly to about 101MB (due to the code) and remains at this level. I don't understand why it takes this 6MB (101-95)...? I want to make it work like this: load image, release from memory, load image, release from memory and so on, so the program should always take 95MB (it takes 95MB when the image is loaded only once in the previous method). What instructions should I use? If it is important, the size of the image is about 10KB. Thank you!

    Read the article

  • How to load an external div that has dynamic content using ajax and jsp/servlets?

    - by A.S al-shammari
    I need to use ajax to load an external div element (an external jsp file) into the current page. That JSP page contains dynamic content – e.g. content that is based on values received from the current session. I solved this problem, but I'm in doubt, because I think that my solution is bad, or maybe there is a better solution, since I'm not an expert. I have three files.

    A JavaScript function that is triggered when an element is clicked; it requests HTML data from a servlet:

        $("#inboxtable tbody tr").click(function(){
            var trID = $(this).attr('id');
            $.post("event?view=read", {id: trID}, function(data){
                $("#eventContent").html(data); // load external file
            }, "html"); // type
        });

    The servlet "event" loads the data and generates HTML content using the include method:

        String id = request.getParameter("id");
        if (id != null) {
            v.add("Test");
            v.add(id);
            session.setAttribute("readMessageVector", v);
            request.getRequestDispatcher("readMessage.jsp").include(request, response);
        }

    The readMessage.jsp file looks like this:

        <p> Text: ${readMessageVector[0]} </p>
        <p> ID: ${readMessageVector[1]} </p>

    My questions: Is this solution good enough to solve this problem – loading an external JSP that has dynamic content? Is there a better solution?

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
    Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto-fills some fields from the SQL database, and the user can fill in other data as they desire. So we have a .dotx template that holds the design of the form, and the user gets the .docx file to fill out when they call it from our program. The problem we're having is that some of our users have been finding that the forms take an exceptionally long time to open up and then, once open, are so slow to respond (scroll around, etc.) that they're unusable. So in my investigations so far, I've found that the problem systems are ones with lower-powered CPUs (unfortunately it happens for systems above our system requirements) and the Word forms that cause the problems are ones with a large number of ActiveX-style checkboxes on them. I verified that reducing the ActiveX checkboxes fixes the form loading problems. So I have the following questions about solutions (we're using Word 2007):

        1) Is there any way to configure Word, or some other settings, so that there won't be such a strain opening a Word form with lots of ActiveX checkboxes? Any way of speeding up Word's opening?
        2) Using legacy-style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value to Checked. Is there a way to configure it so that they can simply click on the checkbox to tick it? "Legacy Forms" as a name kind of worries me (Legacy…); does that mean a future version of Word at some point wouldn't load the checkboxes because they're "legacy"?
        3) Yes, it became clear to me after a little research into solutions that Word is not the tool for the job for forms like I'm describing. InfoPath seems to be exactly what we should have been using all along, but unfortunately I wasn't involved in the decision making or development of these forms, just tasked with coming up with a solution.

    I'd appreciate answers to any of these, or any other ideas for solving this problem. Thanks

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failed server to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS.

    According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Rake db:migrate returns "rake aborted! no such file to load -- spec"

    - by Isaac Yerushalmi
    For some reason, out of nowhere, rails began giving me an error on "rake db:migrate", and I can no longer run migrations. It returns the error:

        no such file to load -- spec
        /home/ti/rails_apps/appname/Rakefile:10

    I've spent two hours searching google for answers, trying to figure this out, but to no avail. What could be the problem? Here is the trace:

        -jailshell-3.2$ rake db:migrate --trace
        (in /home/ti/rails_apps/teamisrael)
        rake aborted!
        no such file to load -- spec
        /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
        /home/ti/rails_apps/teamisrael/vendor/plugins/google-geocoder/tasks/rspec.rake:5
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7:in `each'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7
        /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /home/ti/rails_apps/teamisrael/Rakefile:10
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2349:in `load'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2349:in `raw_load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1985:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1984:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1969:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1967:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/bin/rake:31
        /usr/local/bin/rake:19:in `load'
        /usr/local/bin/rake:19

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in bacula I am getting Input/output errors. I am just getting started in trying to set this up:

        Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
        Sending label command for Volume "ACJ332" Slot 1 ...
        3307 Issuing autochanger "unload slot 8, drive 0" command.
        3304 Issuing autochanger "load slot 1, drive 0" command.
        3305 Autochanger "load slot 1, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ332", Slot 1 successfully created.

    The identical sequence – unload, load, the block.c:1010 read error, then "3000 OK label" and a successful catalog record – repeats for Volume "ACJ331" (Slot 2), "ACJ328" (Slot 3), "ACJ329" (Slot 4), "ACJ335" (Slot 5), "ACJ334" (Slot 6) and "ACJ333" (Slot 7), and the output ends with:

        Sending label command for Volume "ACJ330" Slot 8 ...
        3307 Issuing autochanger "unload slot 7, drive 0" command.

    Bacula-dir:

        # Definition of file storage device
        Storage {
          Name = TapeDevice
          # Do not use "localhost" here
          Address = ny-back01....   # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "..."
          Device = ULTRIUM-HH4
          Media Type = LTO-4
          Media Type = File
          Autochanger = Yes
        }

    Bacula-sd:

        Autochanger {
          Name = StorageLoader1U
          Device = ULTRIUM-HH4
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg5
        }

        Device {
          Name = ULTRIUM-HH4
          Media Type = LTO-4
          Archive Device = /dev/st0
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RemovableMedia = yes;
          RandomAccess = no;
          AutoChanger = yes;
          RandomAccess = no;
        }

    Anyone know what this means / why I am getting this?

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require loading ALL these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, I need the best way to reload the previously observed data. Re-run the parser/populate the collection/populate the GUI? Or probably find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I keep a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative to allow me to keep the collection in memory. This is just general talking. The question of what collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
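
    One common shape for this kind of load-on-demand design – sketched below in TypeScript rather than Java, with invented types – is to record each record's byte offset during the first parse, then deserialize records only when the GUI asks for them, keeping just a small cache in memory:

        import { openSync, readSync, closeSync } from "fs";

        interface Offset { start: number; length: number; }

        class LazyRecordStore {
            private cache = new Map<number, string>(); // only what the GUI has asked for

            // `index` is built once by the initial parse: byte offsets of each record
            constructor(private path: string, private index: Offset[]) {}

            // load a record on demand; everything else stays on disk
            get(i: number): string {
                const hit = this.cache.get(i);
                if (hit !== undefined) return hit;
                const { start, length } = this.index[i];
                const fd = openSync(this.path, "r");
                const buf = Buffer.alloc(length);
                readSync(fd, buf, 0, length, start);
                closeSync(fd);
                const record = buf.toString("utf8");
                this.cache.set(i, record);
                return record;
            }

            evict(): void { this.cache.clear(); } // drop memory when the view changes
        }

    Because the index is tiny compared to the data, it can also hold the fields you filter on (like the ID), so a cross-subset filter becomes an index scan plus on-demand reads, without keeping a master copy of everything in memory.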

    Read the article

  • Been asked a dozen times, but no luck from what I've read. Prevent Anchor Jumping on page load

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com I'm currently using 'smooth scroll' on my page. I'm attempting to have the page smoothly scroll to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio). From there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is it briefly displays the portfolio section (anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :(. Here is the code I'm using.

    The click function for smooth scroll:

        $(function() {
            $('.menu li a').click(function() {
                if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
                    var target = $(this.hash);
                    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
                    if (target.length) {
                        $root.animate({
                            scrollTop: target.offset().top - 75
                        }, 800, 'swing');
                        return false;
                    }
                }
            }); //end of click function
        });

    The page load function:

        $(window).on("load", function() {
            if (location.hash) { // do the test straight away
                window.scrollTo(0, 0); // execute it straight away
                setTimeout(function() {
                    window.scrollTo(0, 0); // run it a bit later also for browser compatibility
                }, 1);
            }
            var urlHash = window.location.href.split("#")[1];
            if (urlHash && $('#' + urlHash).length)
                $('html,body').animate({
                    scrollTop: $('#' + urlHash).offset().top - 75
                }, 800, 'swing');
        });

    Any help would be MUCH appreciated.

    Read the article
