Search Results

Search found 7913 results on 317 pages for 'resource caching'.


  • How to add resource files during build on J2ee (eclipse + jboss)

    - by legendlink
    Hi, I'm trying to run a web servlet project in Eclipse 3.4 using JBoss 4.2.2 as my web server. I'm using the WTP plugin and everything looks good (I can run and debug), but some of the files/resources are not included in the WAR file. In my "WebContent/WEB-INF" folder I have "properties", "config", and "lib" folders, but it seems that when I build and publish the project, only the "config" and "lib" folders are included. How can I include the "properties" folder during the build?

    Read the article

  • Visual Studio 2010: cannot connect to any online resource

    - by KentZhou
    Visual Studio 2010 Premium is installed on Windows 7. When I try to connect to the online gallery or online templates, I get an error like: "Cannot search for online extensions because an error occurred while trying to contact the server", and it asks me to enable access to extensions on the Visual Studio Gallery. That option is already enabled in the Extension Manager section of the Tools/Options page, and the Internet connection is fine, so I can't tell why this happens. On another computer (Windows 7, VS2010 Ultimate trial version) everything works.

    Read the article

  • Caching Mysql database for better performance

    - by kobey
    Hi, I'm using the Amazon cloud and I have a performance issue because the HDD is not located on my machine. My database is small (~500MB) and I can afford to keep all of it in my RAM. I do not want to cache individual queries in RAM; I need all the tables there. How can I do it? Thanks, Koby. P.S. I'm using Ubuntu Server...

    Read the article

  • Django database caching

    - by hekevintran
    I have a Django form that uses an integer field to look up a model object by its primary key. The form has a save() method that uses the model object referred to by the integer field. The model's manager's get() method is called twice, once in the clean method and once in the save() method:

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(self):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is the first call to get
                    MyModel.objects.get(id=id_a)
                except MyModel.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                id_a = self.cleaned_data['id_a']
                # here is the second call to get
                my_model_object = MyModel.objects.get(id=id_a)
                # do other stuff

    I wasn't sure whether this hits the database two times or one time, so I returned the object itself from the clean method so that I could avoid a second get() call. Does calling get() hit the database two times? Or is the object cached in the thread?

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(self):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is my workaround
                    return MyModel.objects.get(id=id_a)
                except MyModel.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                # looking up the cleaned value returns the model object
                my_model_object = self.cleaned_data['id_a']
                # do other stuff

    Read the article

  • How do I 'donut cache' in ASP.NET MVC for something more than a date

    - by Simon_Weaver
    All the examples of donut caching I've seen look just like this:

        <%= Html.Substitute( c => DateTime.Now.ToString() )%>

    That's fine if I just want the date, but what other options are there? I know there is a delegate 'MvcSubstitutionCallback' which has the following signature:

        public delegate string MvcSubstitutionCallback(HttpContextBase httpContext);

    but RenderAction and RenderPartial return void, so I can't just return them from the delegate method. How can I effectively use this callback for more complex situations? I've looked at both of Phil Haack's articles here and here, but neither seems to do exactly what I want.
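
    A hedged sketch of going beyond the date example: since MvcSubstitutionCallback only has to return a string, the callback can compose whatever per-request markup it needs from the HttpContextBase, instead of trying to return RenderPartial (which writes to the response and returns void). CacheSubstitutions and UserGreeting below are hypothetical names, not part of the MVC API.

        // In the view (assuming Html.Substitute from the question is available):
        //   <%= Html.Substitute(ctx => CacheSubstitutions.UserGreeting(ctx)) %>
        using System.Web;

        public static class CacheSubstitutions
        {
            // Runs on every request, even when the surrounding page is served from the output cache.
            public static string UserGreeting(HttpContextBase ctx)
            {
                var name = (ctx.User != null && ctx.User.Identity.IsAuthenticated)
                    ? ctx.User.Identity.Name
                    : "guest";
                // Compose the fragment yourself rather than calling RenderPartial.
                return "<span class=\"greeting\">Hello, " + HttpUtility.HtmlEncode(name) + "</span>";
            }
        }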

    Read the article

  • XmlDocument caching memory usage

    - by mdsharpe
    We are seeing very high memory usage in .NET web applications which use XmlDocument. A small (~5MB) XML document is loaded into an XmlDocument object and stored in HttpContext.Cache for easy querying and XSLT transformation on each page load. The XML is modified on disk periodically, so the cache entry has a dependency on the file. Such an application appears to be using hundreds of megabytes of RAM. I have experimented with requesting garbage collection at the start of each request, and this keeps the RAM usage far lower, but I cannot imagine this is good practice. Does anyone have any suggestions as to how we can achieve the same goal but with lower RAM usage?
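
    For reference, a minimal sketch of the cache-with-file-dependency pattern described above, with one commonly suggested variation: caching a read-only XPathDocument (optimized for XPath/XSLT) rather than a mutable XmlDocument. The cache key and helper name are illustrative, not from the question.

        using System.Web;
        using System.Web.Caching;
        using System.Xml.XPath;

        public static class XmlCache
        {
            public static XPathDocument GetCachedDocument(string path)
            {
                var cache = HttpRuntime.Cache;
                var doc = cache["site-xml"] as XPathDocument;
                if (doc == null)
                {
                    doc = new XPathDocument(path);
                    // The CacheDependency evicts the entry when the file changes on disk,
                    // so the next request reloads it.
                    cache.Insert("site-xml", doc, new CacheDependency(path));
                }
                return doc;
            }
        }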

    Read the article

  • How to cache an image when src is an action which returns the image?

    - by Bipul
    There are lots of questions about how to force the browser to cache or not cache an image, but I am facing a slightly different situation. In several places on my web page, I use the following code for images:

        <img title="<%= Html.Encode(Model.title)%>" src="<%= Url.Action(MVC.FrontEnd.Actions.RetrieveImage(Model.SystemId))%>"/>

    So in the generated HTML it looks like:

        <img title="blahblah" src="http://xyz.com/FrontEnd/Actions/RetrieveImage?imageId=X">

    where X is some integer. I have noticed that although the browser (IE or Mozilla) caches images by default, it is not caching images generated by the method above. Is there any way I can tell the browser to cache images of this type? Thanks in advance.
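
    One way to get the browser to cache such images is to have the image action itself emit caching headers. The sketch below is illustrative rather than the asker's actual controller; the [OutputCache] attribute and the Response.Cache API are standard ASP.NET MVC, but the action body and LoadImageBytes helper are hypothetical.

        using System;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.UI;

        public class ActionsController : Controller
        {
            [OutputCache(Duration = 3600, VaryByParam = "imageId", Location = OutputCacheLocation.Client)]
            public ActionResult RetrieveImage(int imageId)
            {
                byte[] bytes = LoadImageBytes(imageId);   // placeholder for the real lookup
                // Explicit headers work too, if you prefer not to use the attribute:
                // Response.Cache.SetCacheability(HttpCacheability.Public);
                // Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
                return File(bytes, "image/jpeg");
            }

            private byte[] LoadImageBytes(int imageId)
            {
                throw new NotImplementedException();      // stand-in for data access
            }
        }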

    Read the article

  • DirectX Resource Leak

    - by srand
    At the end of my DirectX application I get "The Direct3D device has a non-zero reference count, meaning some objects were not released." The application is large and not written by me; how can I go about debugging which resources are not being released?

    Read the article

  • What is the best way to implement an object cache with Entity Framework?

    - by Harshal
    Say I have a table of "BlogPosts" in a database and I want to cache the ones that have already been retrieved in memory for further reads. I can just use a standard hashtable-style memory cache like System.Web.Caching.Cache. But if I then need to update a property on one of these blog posts, e.g. blogPost.Title, and update the record in the DB, I cannot do this without fetching it again from the database, because the Entity Framework context used to fetch the record when it was loaded into my cache has already been disposed. How do I write code so that I can get an object from my cache, update one property, and just call the SaveChanges method without incurring an extra read?
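
    One common pattern, sketched below under the assumption that the EF 4.1+ DbContext API is available, is to attach the cached, detached entity to a fresh context and mark it Modified, so SaveChanges issues an UPDATE without re-reading the row. BlogPost, BlogContext and BlogPostUpdater are illustrative names, not from the question.

        using System.Data;
        using System.Data.Entity;

        public class BlogPost
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        public class BlogContext : DbContext
        {
            public DbSet<BlogPost> BlogPosts { get; set; }
        }

        public static class BlogPostUpdater
        {
            // 'cachedPost' is assumed to come from an in-memory cache
            // (e.g. System.Web.Caching.Cache) and is detached from any live context.
            public static void SaveTitle(BlogPost cachedPost, string newTitle)
            {
                cachedPost.Title = newTitle;

                using (var context = new BlogContext())
                {
                    context.BlogPosts.Attach(cachedPost);
                    // Marking the entity Modified makes SaveChanges send an UPDATE
                    // without a preceding SELECT. With the older ObjectContext API the
                    // equivalent is ObjectStateManager.ChangeObjectState(entity, EntityState.Modified).
                    context.Entry(cachedPost).State = EntityState.Modified;
                    context.SaveChanges();
                }
            }
        }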

    Read the article

  • Preloading and caching of images in silverlight

    - by Prabhjot Singh
    Hi there. I have a Silverlight application in VS2010 and I am using Silverlight 4.0. I have to show a video presentation in which a video is synchronised with images, so that it runs like a PowerPoint presentation driven by the video. Is it possible to preload the images or cache them, so that they are rendered as soon as the video starts? If there is a way to do this, please guide me.
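
    One possible approach, sketched under the assumption that the slides are ordinary downloadable images: create the BitmapImage objects up front with CreateOptions set to None so Silverlight downloads and decodes them immediately, and hold references to them until the video starts. The class name and the OnSlideReady handler are illustrative only.

        using System;
        using System.Collections.Generic;
        using System.Windows.Media.Imaging;

        public class SlidePreloader
        {
            private readonly List<BitmapImage> _slides = new List<BitmapImage>();

            public void Preload(IEnumerable<Uri> slideUris)
            {
                foreach (Uri uri in slideUris)
                {
                    var bmp = new BitmapImage();
                    bmp.CreateOptions = BitmapCreateOptions.None;   // download/decode now, not on first use
                    bmp.ImageOpened += (s, e) => OnSlideReady((BitmapImage)s);
                    bmp.UriSource = uri;
                    _slides.Add(bmp);                                // keep a reference so it stays in memory
                }
            }

            private void OnSlideReady(BitmapImage bmp)
            {
                // e.g. mark this slide as available to the presentation timeline
            }
        }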

    Read the article

  • getResourceAsStream not loading resource in webapp

    - by mangst
    I have a web application that uses a library which resides in TOMCAT_HOME/common/lib. This library looks for a properties file at the root of the classpath (in a class called ApplicationConfig):

        ApplicationConfig.class.getResourceAsStream("/hv-application.properties");

    My Tomcat web application contains this properties file. It is in WEB-INF/classes, which is the root of the classpath, right? However, at runtime, when the library tries to load the properties file, it throws an exception because it can't find it (getResourceAsStream returns null). Everything works fine if my application is a simple, standalone Java application. Does Tomcat cause the getResourceAsStream method to act differently? I know there are a lot of similar questions out there, but unfortunately none of them have helped. Thanks.

    Read the article

  • WPF/.NET data access models - resource recommendations

    - by jasonk
    We're in the early design/prep phases of transferring/updating a rather large "legacy" 3-tier client-server app to a new version. We're looking at WPF over WinForms, as it appears to be the direction Microsoft is pushing future development, and we'd like to maximize the life cycle/span of the apps. That said, during the rewrite we'd like to make as many changes to our data access/presentation model up front as possible to improve performance. I've been doing some research along that vein, but the vast majority of the resources I've found that discuss WPF focus only on simple data-tracking apps or on the very basics of UI design/controls. The few items that even discuss data presentation are fairly elementary in depth. Are there any books/articles/recommended reading/other resources for development of large enterprise-level business apps? Any "gotchas" that should/could be avoided? General advice to minimize the time underwater is also welcome.

    Read the article

  • Database caching on a shared host

    - by tau
    Anyone have any ideas how to increase MySQL performance on a shared host? My question has less to do with overall database performance and more to do with simply retrieving user-submitted data. Currently my database will create caches at timed intervals, and then the PHP will selectively access the static files it needs. This has given me a noticeable performance boost, but I am worried about a time in which I have so much data that having to read in big files in PHP will actually be slower. I am just looking for ideas for shared hosting solutions; I am not going to get my own server anytime soon. Thanks!

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). This isn't meant to be an exhaustive list of Solaris RM controls.

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities both for marking application progress and for determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.

    Read the article

  • Repeatedly querying xml using python

    - by Jack
    I have some xml documents I need to run queries on. I've created some python scripts (using ElementTree) to do this, since I'm vaguely familiar with using it. The way it works is I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+) and so it takes rather a long time to parse them. On my system, just running: tree = ElementTree.parse(document) takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there was some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to try and batch as many queries as possible together in the python script, but I was hoping there might be another way. Thanks.

    Read the article

  • PHP returns invalid MySQL resource

    - by DeadMG
        $LDATE = '#' . $_REQUEST['LDateDay'] . '/' . $_REQUEST['LDateMonth'] . '/' . $_REQUEST['LDateYear'] . '#';
        $RDATE = '#' . $_REQUEST['RDateDay'] . '/' . $_REQUEST['RDateMonth'] . '/' . $_REQUEST['RDateYear'] . '#';

        include("../../sql.php");
        $myconn2 = mysql_connect(/*removed*/, $username, $password);
        mysql_select_db(/*removed*/, $myconn2);

        $LSQLRequest = "SELECT * FROM flight WHERE DepartureDate = " . $LDATE;
        $LFlights = mysql_query($LSQLRequest, $myconn2);
        $RSQLRequest = "SELECT * FROM flight WHERE DepartureDate = " . $RDATE;
        $RFlights = mysql_query($RSQLRequest, $myconn2);

    Assuming that all the $_REQUESTs are valid numerical values for their appropriate fields in the day/month/year form, how can $LFlights and $RFlights be invalid? When I polled the whole database I got hundreds of results, so I know that the database and connection data are fine, and the field DepartureDate exists too.

    Read the article

  • Tutorial/resource for implementing VM

    - by zaharpopov
    Hello, for self-education purposes I want to implement a simple virtual machine for a dynamic language, preferably in C. Something like the Lua VM, or Parrot, or the Python VM, but simpler. Are there any good resources/tutorials on achieving this, apart from looking at the code and design documentation of the existing VMs? Thanks in advance for your answers and ideas. Edit: why the close vote? I don't understand - is this not programming? Please comment if there is a specific problem with my question.

    Read the article

  • caching images that are retrieved

    - by Rahul Varma
    Hi, I am retrieving a list of images and text from a web service. The images are displayed in the list view, but the problem is that when I scroll down the list, all the images get loaded once again. When I scroll twice or thrice, an OutOfMemory error occurs... Can anyone tell me how to cache the images, and also how to avoid reloading images that are already loaded when I scroll down? I have tried to increase inSampleSize but it didn't work... Here's the code:

        public static Bitmap loadImageFromUrl(String url) {
            InputStream inputStream;
            Bitmap b;
            try {
                inputStream = (InputStream) new URL(url).getContent();
                BitmapFactory.Options bpo = new BitmapFactory.Options();
                b = BitmapFactory.decodeStream(inputStream, null, bpo);
                b.recycle();
                bpo.inSampleSize = 2;
                return b;
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            // return null;
        }

    Read the article

  • Is asp.net caching my sql results?

    - by Christian W
    I have the following method in an App_Code/Globals.cs file:

        public static XmlDataSource getXmlSourceFromOrgid(int orgid)
        {
            XmlDataSource xds = new XmlDataSource();
            var ctx = new SensusDataContext();
            SqlConnection c = new SqlConnection(ctx.Connection.ConnectionString);
            c.Open();
            SqlCommand cmd = new SqlCommand(String.Format("select orgid, tekst, dbo.GetOrgTreeXML({0}) as Subtree from tblOrg where OrgID = {0}", orgid), c);
            var rdr = cmd.ExecuteReader();
            rdr.Read();
            StringBuilder sb = new StringBuilder();
            sb.AppendFormat("<node orgid=\"{0}\" tekst=\"{1}\">", rdr.GetInt32(0), rdr.GetString(1));
            sb.Append(rdr.GetString(2));
            sb.Append("</node>");
            xds.Data = sb.ToString();
            xds.ID = "treedata";
            rdr.Close();
            c.Close();
            return xds;
        }

    This gives me an XML structure to use with the ASP.NET TreeView control (I also use the CssFriendly adapter to get nicer markup). My problem is that if I log on on my PC with a code that gives me access at a lower level in the tree hierarchy (it's an organization hierarchy), it somehow "remembers" what level I logged on at. So when my coworker tests from her computer with another code, giving access to another place in the tree, she gets the same tree as me. (The tree is supposed to show your own level and down.) I have added an HTML comment to show what orgid is passed to the function, and the orgid passed is correct. So either the TreeView caches something server-side, or the SQL query caches its result somehow... Any ideas? SQL function:

        ALTER function [dbo].[GetOrgTreeXML](@orgid int)
        returns XML
        begin
            RETURN (select org.orgid as '@orgid',
                           org.tekst as '@tekst',
                           [dbo].GetOrgTreeXML(org.orgid)
                    from tblOrg org
                    where (@orgid is null and Eier is null) or Eier = @orgid
                    for XML PATH('NODE'), TYPE)
        end

    Extra code as requested:

        int orgid = int.Parse(Session["org"].ToString());
        string orgname = context.Orgs.Where(q => q.OrgID == orgid).First().Tekst;
        debuglit.Text = String.Format("<!-- Id: {0} \n name: {1} -->", orgid, orgname);
        var orgxml = Globals.getXmlSourceFromOrgid(orgid);
        tvNavtree.DataSource = orgxml;
        tvNavtree.DataBind();

    Where "debuglit" is an asp:Literal in the aspx file. EDIT: I have narrowed it down. All functions return correct values; it just doesn't bind to them. I suspected the CssFriendly adapter had something to do with it, but I disabled the adapter and the problem persists... Stepping through it in debug, everything is correct all the way: with the debugger stopped on "tvNavtree.DataBind();" I can hover the pointer over tvNavtree.DataSource and see that it actually has the correct data. So something must be faulting in the binding process...

    Read the article

  • Caching the repository index in m2eclipse

    - by Titi Wangsa bin Damhore
    Every time I start with a fresh new workspace, m2eclipse downloads nexus-maven-repository-index.gz from the Maven central repository. This is good, but sometimes I just want to start a new workspace and not wait for the download. I tried copying the whole .metadata directory from an old workspace to the new one, but the list of Maven artifacts is still empty. Is there a way I can cache the index? Or at least download the file once, and then copy/extract/repackage it so that m2eclipse thinks it has already downloaded it and allows me to search for Maven artifacts? The short version of the question: where, and in what format, is the "nexus-maven-repository-index.gz" file stored in the workspace?

    Read the article

  • Resource.h in Windows API simple app

    - by nXqd
    These lines are in the sample Win32 app created by default by VS. Can you explain why they're just numbers, and what they mean? :)

        //{{NO_DEPENDENCIES}}
        // Microsoft Visual C++ generated include file.
        // Used by Testing Project.rc
        //
        #define IDS_APP_TITLE              103
        #define IDR_MAINFRAME              128
        #define IDD_TESTINGPROJECT_DIALOG  102
        #define IDD_ABOUTBOX               103
        #define IDM_ABOUT                  104
        #define IDM_EXIT                   105
        #define IDI_TESTINGPROJECT         107
        #define IDI_SMALL                  108
        #define IDC_TESTINGPROJECT         109
        #define IDC_MYICON                 2
        #ifndef IDC_STATIC
        #define IDC_STATIC                 -1
        #endif

        // Next default values for new objects
        //
        #ifdef APSTUDIO_INVOKED
        #ifndef APSTUDIO_READONLY_SYMBOLS
        #define _APS_NO_MFC                    130
        #define _APS_NEXT_RESOURCE_VALUE       129
        #define _APS_NEXT_COMMAND_VALUE        32771
        #define _APS_NEXT_CONTROL_VALUE        1000
        #define _APS_NEXT_SYMED_VALUE          110
        #endif
        #endif

    Read the article

  • Use .js files for caching large dropdown lists.

    - by ProfK
    I would like to keep the contents of large UI lists cached on the client, and updated according to certain criteria or at regular intervals. Client-side code can then just fill the dropdowns locally, avoiding long page download times. How can I go about this? I mean, what patterns and strategies would be suitable for this?
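
    One pattern that fits this, sketched below for ASP.NET (which the question appears to assume): serve each list as a script resource with long-lived cache headers, and bump a version querystring (e.g. lists.ashx?v=42) whenever the underlying data changes so clients re-download only then. The handler name and LoadCountryList data source are hypothetical.

        using System;
        using System.Web;
        using System.Web.Script.Serialization;

        public class DropDownListHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string[] items = LoadCountryList();            // placeholder for the real data source
                string json = new JavaScriptSerializer().Serialize(items);

                context.Response.ContentType = "application/javascript";
                // Let the browser keep the file; a changed ?v= querystring forces a re-download.
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));
                context.Response.Write("var countryList = " + json + ";");
            }

            private static string[] LoadCountryList()
            {
                return new[] { "Andorra", "Belgium", "Canada" };   // stand-in data
            }
        }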

    Read the article

  • A resource for installation or setup folder hierarchy of various applications

    - by Luay
    Is there a web site (or something else) that has information on the folders, and their hierarchy, into which various programs and applications are installed? Let me explain. I am writing an application that (as part of what it does) references certain files installed by other applications. To determine the file paths for these folders I have to download and install each application separately on my development PC, search for the file I want, and then write its path into my application. This method is very time consuming and, frankly, boring, as it requires downloading and installing each application (some of them in excess of 600MB) and then locating the required file just to be able to "know" its path. So I was wondering if there is something that could speed things up - for example, a website that has such information. I tried each application's own website for information, but no dice. Any help will be much appreciated. Many thanks.

    Read the article
