Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 137/157 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (tens to hundreds of thousands) of objects. Each object is email-like - there is a main text body and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it. It will be read often. Deletions will be rare. The data is almost all human-readable text, so it will be readily compressible. A system that lets me issue the I/Os and manage the memory and caching would be ideal. I'm going to keep the indexes in memory, using them to map to the single (and primary) key for each object. Once I have the key, I'll load the object from disk or from the cache. The data management system needs to be part of my application - I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) solution would be OK. I believe that a database is an obvious choice, but this needs to be super-fast for looking up an object and loading it into memory. I am not experienced with database tech and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job - it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
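
    A minimal sketch of the kind of store described above: an append-only data file plus an in-memory index mapping each GUID to an offset and length. It is written in Java purely for illustration (the question asks about C++ or C#); the class, record layout and method names are assumptions, and a real store would still need durability, compression and error handling.

        import java.io.RandomAccessFile;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.UUID;

        // Append-only object store: writes go to the end of one data file,
        // an in-memory map holds GUID -> (offset, length) for fast lookup.
        public class SimpleObjectStore {
            private static final class Entry {
                final long offset; final int length;
                Entry(long offset, int length) { this.offset = offset; this.length = length; }
            }

            private final RandomAccessFile data;
            private final Map<UUID, Entry> index = new HashMap<>();

            public SimpleObjectStore(String path) throws Exception {
                this.data = new RandomAccessFile(path, "rw");
            }

            // Append the object's bytes and remember where they landed.
            public UUID put(String body) throws Exception {
                byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                long offset = data.length();
                data.seek(offset);
                data.write(bytes);
                UUID id = UUID.randomUUID();
                index.put(id, new Entry(offset, bytes.length));
                return id;
            }

            // Look up the offset in memory, then read only that region from disk.
            public String get(UUID id) throws Exception {
                Entry e = index.get(id);
                if (e == null) return null;
                byte[] bytes = new byte[e.length];
                data.seek(e.offset);
                data.readFully(bytes);
                return new String(bytes, StandardCharsets.UTF_8);
            }
        }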

    Read the article

  • Creating multiple instances of a generic database

    - by sagekilla
    Hi all, currently I'm trying to set up a system where a generic database is distributed to students. They would develop an application using this database (say a shopping cart application), submit their project to our server, and then it would be graded automatically. These databases run on Microsoft SQL Server 2005. We're using user instances to instantiate each database, and multiple requests can be serviced at once. The problem is that when more than one student submits a project to be graded, the first database to be instantiated is the only one and overwrites all other copies that are currently open. So if stu1 modified his database while stu2 and stu3 had their projects being graded concurrently, stu1, stu2, and stu3 would all have identical DBs at the end of grading. Is there any way I can have multiple independent copies of a generic database, each of which I can load concurrently and modify without changes made to any one affecting the others? I did a little reading and thought it might be possible to do something along the lines of: student submits project; attach the database with a unique db name (specified by the student); do all necessary operations; detach the database. I'm unsure if this would fix our problem or even be possible, so any help would be much appreciated!
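
    One way to realize the attach-with-a-unique-name flow described above, sketched in Java/JDBC against SQL Server. The connection string, file paths, and the grading step are placeholders, and a real implementation would need to validate the database name rather than concatenating it into SQL.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Attach a copy of the template MDF under a unique name, grade it, then detach.
        public class GradingRun {
            public static void grade(String student, String mdfPath) throws Exception {
                String dbName = "grading_" + student;              // unique per submission
                try (Connection master = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true");
                     Statement st = master.createStatement()) {

                    // Attach the student's copy of the database under its own name.
                    st.executeUpdate("CREATE DATABASE [" + dbName + "] ON (FILENAME = '" + mdfPath + "') FOR ATTACH");
                    try {
                        runGradingQueries(dbName);                  // hypothetical grading step
                    } finally {
                        // Drop other connections and detach so the files can be archived or reused.
                        st.executeUpdate("ALTER DATABASE [" + dbName + "] SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
                        st.executeUpdate("EXEC sp_detach_db '" + dbName + "'");
                    }
                }
            }

            private static void runGradingQueries(String dbName) { /* grading logic goes here */ }
        }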

    Read the article

  • Mysql - wondering about scaling a twitter-like application ?

    - by user246114
    Hi, I'm developing an app that is vaguely similar to twitter, in that it allows users to follow one another. I wanted to do this using google app engine, for its scalability promises, but it's proving kind of difficult to get running for a few different reasons. I'd basically like to have a _users table, and a _followers table. Users go into the users table, follower relationships go into _followers. The problem is that each row in the users table will probably have like 100 corresponding records in the _followers table as users start following one another. So the number of rows is going to explode quickly. Using app engine, the volume [shouldn't] be a problem. If I go with mysql, and I do actually start to get some traction, how do I scale this up? Am I going to just end up moving to a distributed database in the end anyway? Should I fight it out with google app engine? I read that Twitter was using mysql, and they've run into this problem, and are now switching to cassandra. Thanks

    Read the article

  • Regex to GENERATE thumbnails!?!?! (but that's crazy!)

    - by CryptoMonkey
    Hello everyone! So here is my situation, and the solution that I've come up with to solve the problem. I have created an application that includes TinyMCE to allow users to create HTML content for publishing. The user can include images in their markup and drag/resize those images, affecting the final width/height attributes in the IMG tag. This is all great: users can include images and resize/relocate them to their desired appearance. But one big problem is that I am now sending a (possibly) much larger image to the client, only to have the browser resize the image to the requested width/height. All that bandwidth and lost load time... So my solution is to pre-process my users' markup content, scanning all of the IMG tags and parsing out the height/width/src attributes, then setting each img's SRC to a phpThumb request with the parsed height/width passed into the thumbnail's URL. This will create my reduced-size image (optimising bandwidth at the expense of CPU and caching). What do you think about this solution? I've seen other posts where people were using mod_rewrite to do something similar, but I want to transform the content when the page is served rather than manipulate the image requests as they're being received. Any thoughts about this design? I need some help with the fine details, as my regex skills need some work, but I'm very short on time and promise to pay my technical knowledge debt soon. To make the regexes easier, I can be sure of some things: only img tags that need this processing will have existing width="" and height="" attributes (with the double quotes and lower-cased text, though I suppose matching case-insensitively would be better in case TinyMCE changes). So a regex to match only the necessary img tags, and maybe another three regexes to extract the src, the width, and the height? Thanks everyone.
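
    A rough sketch of the tag matching and attribute extraction described above, written in Java for illustration (in PHP the same patterns would typically go into preg_match_all or preg_replace_callback). The thumbnail URL format is an assumption, and regexes like these only cope with the fairly regular markup an editor such as TinyMCE emits.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Rewrites <img> tags that carry explicit width/height so their src points at a
        // thumbnail script with the same dimensions. Patterns and the thumbnail URL
        // format are illustrative, not a drop-in for phpThumb.
        public class ImgRewriter {
            private static final Pattern IMG    = Pattern.compile("<img\\b[^>]*>", Pattern.CASE_INSENSITIVE);
            private static final Pattern SRC    = Pattern.compile("src=\"([^\"]*)\"", Pattern.CASE_INSENSITIVE);
            private static final Pattern WIDTH  = Pattern.compile("width=\"(\\d+)\"", Pattern.CASE_INSENSITIVE);
            private static final Pattern HEIGHT = Pattern.compile("height=\"(\\d+)\"", Pattern.CASE_INSENSITIVE);

            public static String rewrite(String html) {
                Matcher tags = IMG.matcher(html);
                StringBuffer out = new StringBuffer();
                while (tags.find()) {
                    String tag = tags.group();
                    Matcher src = SRC.matcher(tag), w = WIDTH.matcher(tag), h = HEIGHT.matcher(tag);
                    if (src.find() && w.find() && h.find()) {
                        // Point the src at the thumbnailer, passing the displayed dimensions.
                        String thumb = "/phpThumb.php?src=" + src.group(1) + "&w=" + w.group(1) + "&h=" + h.group(1);
                        tag = SRC.matcher(tag).replaceFirst("src=\"" + Matcher.quoteReplacement(thumb) + "\"");
                    }
                    tags.appendReplacement(out, Matcher.quoteReplacement(tag));
                }
                tags.appendTail(out);
                return out.toString();
            }
        }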

    Read the article

  • Does jquery live slow down websites?

    - by chobo2
    Hi, I have a problem. I am using jQuery UI tabs that load everything with Ajax. Right now, every time you click on a tab a partial view is loaded into that tab. In this partial view there are JavaScript files that use jQuery to bind all the events needed in that tab, plus some jQuery plugins I am using. Every time that tab is loaded, all those scripts are loaded again. If the tab is clicked 10 times, those scripts are loaded 10 times, meaning each of my buttons will now have 10 copies of the same handler, so if someone clicks a button all 10 events fire and do the same thing. So I need to find a solution: either move all the script out, have it on the main page, and use jQuery.live, or some other approach. I tried to use jQuery caching for the UI tabs, but this won't work, since some things in, say, Tab A, when changed, affect Tab B, meaning I need Tab B to be reloaded, but the scripts can't be reloaded or I run into the same problem as now.

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I’m facing an issue with NHibernate performance - could you please suggest some optimizations? Below is a short summary of my application architecture. I have a Windows service which listens to a messaging bus. On receiving a message, the service creates an object, one property of which is the received xml snippet, and saves the message to the DB (uses NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen. While the UI does a refresh, it retrieves the xml and deserializes it, from which the object’s properties are derived and bound to the screen. For example, assume an xml XXX is received by the service: it deserializes the xml, creates the book object and saves it to the DB, and a property/column SCHEMA contains the xml snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the xml that was saved (yes, the xml is the constructor param). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible; however, the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it’s due to the fairly large size of the xml snippet and its deserialization. My question is, how can I improve the performance? I dispose sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it’s possible that all objects have been updated by some downstream systems, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
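
    One caching idea for the situation described: keep the deserialized objects between refreshes, keyed by ID, and only re-deserialize a row whose version (or last-modified timestamp) has changed. The sketch below is written in Java purely to illustrate the idea; the C#/NHibernate equivalent would hang the same logic off a dictionary or a second-level cache, and the version column is an assumption about the schema.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Keep deserialized objects around between refreshes, keyed by id, and only
        // re-deserialize when the stored row's version (or timestamp) has changed.
        public class DeserializationCache<T> {
            private static final class Slot<V> {
                final long version; final V value;
                Slot(long version, V value) { this.version = version; this.value = value; }
            }

            public interface Deserializer<T> { T fromXml(String xml); }

            private final Map<String, Slot<T>> cache = new ConcurrentHashMap<>();

            public T get(String id, long version, String xml, Deserializer<T> deserializer) {
                Slot<T> slot = cache.get(id);
                if (slot != null && slot.version == version) {
                    return slot.value;                       // unchanged since last refresh: skip deserialization
                }
                T value = deserializer.fromXml(xml);         // the expensive step, done only when needed
                cache.put(id, new Slot<>(version, value));
                return value;
            }
        }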

    Read the article

  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple demo here: http://jsbin.com/idowi/34 The code is extremely simple:

        setTimeout("refreshImage()", 100);

        function refreshImage() {
            var date = new Date();
            var ticks = date.getTime();
            $('#image').attr('src', 'http://127.0.0.1:2723/signature?' + ticks.toString());
            setTimeout("refreshImage()", 100);
        }

    Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for users to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of the current state of the signature pad and sends it to the browser. This causes no problems in any browser (tested in IE7, IE8, and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript, but that just made things worse. For a bit more information, Firefox seems to be executing the JavaScript at the correct frame rate, as the requests arrive at the server at a constant speed, but the images are refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). I have also tried different image formats, all with the same results; the formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor flicker problem in Chrome, though). Could it be that Firefox is somehow caching my images, causing a slight lag? I send these headers though:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0

    Read the article

  • When clicking on ajax.actionlink in its oncomplete function I can't update the html of a div with requested data

    - by Milka Salkova
    In one partial view I've got some Ajax.ActionLinks which, when clicked, update the div 'importpartupdate' (they just update the div with new Ajax.ActionLinks with other route values). The problem is that when this update is completed I have to update another div, depending on which link was clicked. That's why in the OnComplete function of my Ajax.ActionLink I make an ajax request to the action 'GridViewLanguage', which returns the partial view that should update this other div with class .floatLanguage. The first time I click a link everything works correctly and my two divs are correctly updated. But the second time I click a new link, the floatLanguage div is not updated, as if the browser is somehow caching the previous response. I tried cache: false - nothing worked.

        @model MvcBeaWeb.GroupMenu
        <nav class="sidebar-nav">
          <div class="divLeftShowMenu">
            <ul>
              @{
                if (Model != null)
                {
                  foreach (MvcBeaDAL.WebServiceBeaMenu item in Model.MenuLeft)
                  {
                    <li>
                      @Ajax.ActionLink(@item.SpecialWord, "ImportShow",
                        new { id = Model.LanguageName, menuID = item.ID, articlegroupID = item.ArticlegroupID, counter = 1 },
                        new AjaxOptions
                        {
                          UpdateTargetId = "importPartUpdate",
                          HttpMethod = "GET",
                          InsertionMode = InsertionMode.Replace,
                          OnComplete = "success(" + @item.ID + ")"
                        },
                        new { id = item.ID })
                    </li>
                  }
                }
              }
            </ul>
          </div>
        </nav>

        <script>
          function success(ids) {
            var nocache = new Date().getTime();
            jQuery.ajax({
              url: '@Url.Action("GridLanguageView")/?menuID=' + ids
            }).done(function (data) {
              $(".floatLanguage").replaceWith(data);
              alert(data);
            });
          }
        </script>

    Read the article

  • Threading in client-server socket program - proxy sever

    - by crazyTechie
    I am trying to write a program that acts as a proxy server. The proxy server basically listens on a given port (7575) and forwards the request to the server. As of now, I have not implemented caching of the response. The code looks like:

        ServerSocket socket = new ServerSocket(7575);
        Socket clientSocket = socket.accept();
        clientRequestHandler(clientSocket);

    I changed the above code as below:

        // calling the same clientRequestHandler method from inside another method
        Socket clientSocket = socket.accept();
        Thread serverThread = new Thread(new ConnectionHandler(clientSocket));
        serverThread.start();

        class ConnectionHandler implements Runnable {
            Socket clientSocket = null;

            ConnectionHandler(Socket client) {
                this.clientSocket = client;
            }

            @Override
            public void run() {
                try {
                    PrxyServer.clientRequestHandler(clientSocket);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    Using this code, I am able to open a web page like Google. However, if I open another web page, even after I have completely received the first response, I get a "connection reset by peer" exception. 1. How can I handle this issue? 2. Can I use threading to handle different requests? Can someone give a reference where I can look for example code that implements threading? Thanks.

    Read the article

  • Ajax Update Panel Not Displaying Updated Image in IE, Chrome and FireFox

    - by jmease
    I have a parent page with an Ajax UpdatePanel that contains an image and a button. There is a hyperlink that opens up a child page. When the child page is submitted, an OnClientClick event triggers a JavaScript function that clicks the button in the update panel on the parent page; the button's click event is the trigger for the panel as well as the event that updates the image URL. When I use this on my Android tablet, it works perfectly. However, it doesn't work at all in any browser I've used on a PC (Windows XP). The image URL updates, but the updated image doesn't display without refreshing the entire page. In IE, I can right-click on the image and click Show Image and it updates. In Chrome and Firefox, I have to refresh the entire page. Why would an Ajax control only work properly on Android, and what could I be doing wrong that would cause the image not to redisplay on my PC without refreshing the page, even though the image URL is clearly being updated properly? I suspect a caching issue, but don't know how to correct it.

    Read the article

  • Question about how AppFabric's cache feature can be used.

    - by Kevin Buchan
    I apologize for asking a question that I should be able to answer from the documentation, but I have read and read and searched and cannot answer this question, which leads me to believe that I have a fundamentally flawed understanding of what AppFabric's caching capabilities are intended for. I work for a geographically dispersed company. We have a particular application that was originally written as a client/server application. It's so massive and business-critical that we want to baby-step converting it to a better architected solution. One of the ideas we had was to convert the app to read its data using WCF calls to a co-located web server that would cache communication with the database in the United States. The nature of the application is such that everyone will tend to be viewing the same 2000 records or so, with only occasional updates, and those updates will be made by a limited set of users. I was hoping that AppFabric's cache mechanism would allow me to set up one global cache, so that when a user in Asia, for example, requested data that was not in the cache or was stale, the web server would read from the database in the USA, provide the data to the user, and then update the cache, which would propagate that data to the other web servers so that they would know not to go back to the database themselves. Can AppFabric work this way, or should I just have the servers retrieve their own data from the database?
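
    The flow described here is essentially a read-through / cache-aside pattern. The sketch below illustrates that flow only; it is written in Java purely for illustration, it is not AppFabric's API (which is .NET), and the in-process map stands in for whatever shared or distributed cache the front-end servers would actually talk to.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        // Read-through / cache-aside: serve from the shared cache, fall back to the
        // central database on a miss, then populate the cache so other front-end
        // servers don't have to go back to the database themselves.
        public class ReadThroughCache<K, V> {
            private final Map<K, V> cache = new ConcurrentHashMap<>();  // stand-in for the distributed cache
            private final Function<K, V> loadFromDatabase;               // the expensive cross-ocean read

            public ReadThroughCache(Function<K, V> loadFromDatabase) {
                this.loadFromDatabase = loadFromDatabase;
            }

            public V get(K key) {
                // computeIfAbsent = check the cache first, load and publish only on a miss
                return cache.computeIfAbsent(key, loadFromDatabase);
            }

            public void put(K key, V value) {
                // writes go to the database first (not shown), then refresh the cached copy
                cache.put(key, value);
            }
        }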

    Read the article

  • How to send java.util.logging to log4j?

    - by matt b
    I have an existing application which does all of its logging against log4j. We use a number of other libraries that either also use log4j or log against Commons Logging, which ends up using log4j under the covers in our environment. One of our dependencies even logs against slf4j, which also works fine since it eventually delegates to log4j as well. Now, I'd like to add ehcache to this application for some caching needs. Previous versions of ehcache used commons-logging, which would have worked perfectly in this scenario, but as of version 1.6-beta1 they have removed the dependency on commons-logging and replaced it with java.util.logging instead. Not really being familiar with the built-in JDK logging available with java.util.logging, is there an easy way to have any log messages sent to JUL logged against log4j, so I can use my existing configuration and setup for any logging coming from ehcache? Looking at the javadocs for JUL, it looks like I could set a system property (java.util.logging.manager) to change which LogManager implementation is used, and perhaps use that to wrap log4j Loggers in the JUL Logger class. Is this the correct approach? Kind of ironic that a library's use of built-in JDK logging would cause such a headache when (most of) the rest of the world is using third-party libraries instead.
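
    One option is the jul-to-slf4j bridge (SLF4JBridgeHandler), since slf4j is already on the classpath here; another is a small hand-rolled java.util.logging Handler that forwards records to log4j. The sketch below shows the latter; it is a minimal illustration that skips message parameter formatting and handler levels.

        import java.util.logging.Handler;
        import java.util.logging.LogRecord;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        // A java.util.logging Handler that forwards every record to log4j, so libraries
        // that log through JUL (like ehcache 1.6) end up in the existing log4j config.
        public class JulToLog4jHandler extends Handler {

            @Override
            public void publish(LogRecord record) {
                String name = record.getLoggerName() != null ? record.getLoggerName() : "jul";
                Logger target = Logger.getLogger(name);
                target.log(toLog4jLevel(record.getLevel()), record.getMessage(), record.getThrown());
            }

            private static Level toLog4jLevel(java.util.logging.Level level) {
                int value = level.intValue();
                if (value >= java.util.logging.Level.SEVERE.intValue())  return Level.ERROR;
                if (value >= java.util.logging.Level.WARNING.intValue()) return Level.WARN;
                if (value >= java.util.logging.Level.INFO.intValue())    return Level.INFO;
                return Level.DEBUG;
            }

            @Override public void flush() { }
            @Override public void close() { }

            // Install once at startup: route the JUL root logger through this handler.
            public static void install() {
                java.util.logging.Logger root = java.util.logging.Logger.getLogger("");
                for (Handler h : root.getHandlers()) {
                    root.removeHandler(h);
                }
                root.addHandler(new JulToLog4jHandler());
            }
        }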

    Read the article

  • How do I choose what and when to cache data with ob_start rather than query the database?

    - by Tim Santeford
    I have a home page that has several independent dynamic parts. The parts consist of a list of recent news from the company, a site statistics panel, and the online status of certain employees. The recent news changes monthly, site statistics change daily, and online statuses change on a per-minute basis. I would like to cache these panels so that the db is not hit on every page load. Is using ob_start() and then ob_get_contents() to cache these parts to a file the correct way to do this, or is there a better method in PHP5 for doing this? In asking this question I'm trying to answer these additional questions: How can I determine the correct approach for caching this data without doing extensive benchmarking? Does it make sense to cache these parts in different files and then join them together per request, or should I re-query the data and cache once per minute? I'm looking for a rule of thumb for planning pages and for situations where doing testing is not cost-effective (the client is not paying enough for it, I mean). Thanks!
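
    The shape being described -- render each panel once, write the HTML fragment to its own file, and reuse it until that panel's own time-to-live expires -- could look roughly like the sketch below. It is written in Java only to illustrate the idea; in PHP the equivalent pieces are ob_start()/ob_get_contents(), file_put_contents() and filemtime(), and the per-panel TTLs are just the examples from the question.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.time.Duration;
        import java.time.Instant;
        import java.util.function.Supplier;

        // File-backed fragment cache: each panel gets its own file and its own TTL
        // (news: monthly, statistics: daily, online status: one minute).
        public class FragmentCache {
            private final Path dir;

            public FragmentCache(Path dir) { this.dir = dir; }

            public String fragment(String name, Duration ttl, Supplier<String> render) throws IOException {
                Files.createDirectories(dir);
                Path file = dir.resolve(name + ".html");
                if (Files.exists(file)) {
                    Instant modified = Files.getLastModifiedTime(file).toInstant();
                    if (modified.plus(ttl).isAfter(Instant.now())) {
                        return Files.readString(file);        // still fresh: no database hit
                    }
                }
                String html = render.get();                   // hit the database and re-render
                Files.writeString(file, html);
                return html;
            }

            public static void main(String[] args) throws IOException {
                FragmentCache cache = new FragmentCache(Paths.get("/tmp/fragments"));
                String status = cache.fragment("online-status", Duration.ofMinutes(1),
                        () -> "<ul><li>employee list rendered from the database</li></ul>");
                System.out.println(status);
            }
        }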

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it: I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed among other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else); each SELECT just took longer this time! Everything was running synchronously, there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • php memory how much is too much

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly, caching pages in files to avoid SQL queries and generally limiting the number of SQL queries. Overall it looks like it's very speedy. The average time taken for the front page (measured over 100 requests) is 0.046152 seconds. But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used while the script has been running, the average (taken over 100 times) is 1572864 bytes. Is that good? I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, get the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory being hit 50,000 times a day? Or once every second at peak times? I realise that this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark and I'm really just looking for some reassurance that it's not going to die horribly come re-launch day.

    Read the article

  • LINQ optimization

    - by Budda
    Here is a piece of code:

        void MyFunc(List<MyObj> objects)
        {
            MyFunc1(objects);
            foreach (MyObj obj in objects.Where(obj1 => obj1.Good))
            {
                // Do Action With Good Object
            }
        }

        void MyFunc1(List<MyObj> objects)
        {
            int iGoodCount = objects.Where(obj1 => obj1.Good).Count();
            BeHappy(iGoodCount);
            // do other stuff with 'objects' collection
        }

    Here we see that the collection is analyzed twice, and each time the value of the 'Good' property is checked for each member: the first time when calculating the count of good objects, the second when iterating through all good objects. It would be desirable to have that optimized, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects, it can be IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; in the MyFunc method, iterate not through 'objects.Where(...)' but through the 'goodObjects' collection. Not too bad an approach (as far as I can see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that allows caching during the first Where().Count(), remembering the processed collection and reusing it in the next iteration? Any thoughts are welcome. Thanks.

    Read the article

  • Refetching a previously visited page

    - by user613665
    All, I am having a field day with page refetching. Any help or pointer will be greatly appreciated!! The behavior is a bit specific to mobile browsers. Problem: I have two pages and created a shortcut link to pg#1 on the home page. Through a form submit button, the user is taken from pg#1 to pg#2. All that is working fine. Now once I am on pg#2, I leave the browser and click the shortcut later. The browser stays on pg#2 and won't go to pg#1, even though the paths in urls.py are different between the two views. It is almost like Django decides that since I have already visited view#1, it doesn't need to fetch it again. This behavior doesn't happen if I move the same code that handles the two views and the templates to a bare-bones test project. Setup: I am using django-registration and context sessions. I am not using any HTML caching tag. I already have DEBUG turned on in my settings.py. Are there other ways I can tell what the server is doing? Thanks in advance. pdxMobile Update: Here are the code snippets.

        def sendmsg(request):
            if request.method == 'POST':
                messages.add_message(request, messages.INFO, "Hello world")
                return redirect('rcvmsg')
            return render_to_response('sendMsg.html', RequestContext(request))

        def rcvmsg(request):
            '''view that receives the msg.'''
            printMsg = 'Didnt get a message'
            if messages:
                thisMsg = messages.get_messages(request)
                for rcvMsg in thisMsg:
                    printMsg = rcvMsg
            return render_to_response('rcvMsg.html', {'print_msg': printMsg}, RequestContext(request))

    URLs:

        url(r'^rcvMsg/', 'mydomain.mainApp.views.rcvmsg', name='rcvmsg'),
        (r'^sendMsg/code', 'mydomain.mainApp.views.sendmsg'),

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user568249
    I'm currently working in an area related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this, let me say I have the following matrix: [a b; c d]. I want to find a data structure that will allow a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 0 and SD 1. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do this. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
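
    One way to model "each entry is either a constant or a distribution" is an entry type with a sample() operation, so realizing the matrix just means sampling every entry. The sketch below is written in Java purely to illustrate the shape of the data structure; a MATLAB version would more likely be a cell array of function handles, and the class and method names here are made up.

        import java.util.Random;

        // Matrix entries are either fixed numbers or distributions; realize() draws one
        // concrete matrix by sampling every random entry.
        public class StochasticMatrix {
            public interface Entry { double sample(Random rng); }

            public static Entry constant(double value) { return rng -> value; }

            public static Entry normal(double mean, double sd) {
                return rng -> mean + sd * rng.nextGaussian();  // inverse-CDF sampling would also work here
            }

            private final Entry[][] entries;

            public StochasticMatrix(Entry[][] entries) { this.entries = entries; }

            public double[][] realize(Random rng) {
                double[][] out = new double[entries.length][];
                for (int i = 0; i < entries.length; i++) {
                    out[i] = new double[entries[i].length];
                    for (int j = 0; j < entries[i].length; j++) {
                        out[i][j] = entries[i][j].sample(rng);
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                // a = 1, b = -1, c = 2, d ~ Normal(0, 1), as in the example above
                StochasticMatrix m = new StochasticMatrix(new Entry[][] {
                    { constant(1), constant(-1) },
                    { constant(2), normal(0, 1) }
                });
                System.out.println(java.util.Arrays.deepToString(m.realize(new Random())));
            }
        }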

    Read the article

  • MVC3 Custom view engine

    - by eth0
    I have a custom view engine that derives from WebFormViewEngine. There's a lot of stuff going on in there, mostly caching. I want to be able to use WebFormViewEngine AND RazorViewEngine at the same time; is this possible? Ideally I'd like to do:

        ViewEngines.Add(new MyViewEngine<WebFormsViewEngine>());
        ViewEngines.Add(new MyViewEngine<RazorViewEngine>());

    If an .ascx/.aspx/.master file exists then use WebForms, otherwise use Razor if a .cshtml file exists. EDIT: I should have worded my question better. As my custom view engine derives from WebFormViewEngine it obviously uses WebForms, and I can't derive from two classes. I can derive from RazorViewEngine, but then I'll lose WebForms. I could duplicate my code entirely, derive from RazorViewEngine and edit the views' file extensions, etc., but as I said I've got a lot of custom code in my view engine and would be duplicating hundreds of lines. WebFormViewEngine and RazorViewEngine derive from BuildManagerViewEngine, which in turn implements IViewEngine. The problem with that is I have to implement the methods CreatePartialView() and CreateView(), but how would I know what to return (WebForms/Razor?) using generics?

    Read the article

  • geb StaleElementReferenceException

    - by Brian Mortenson
    I have just started using geb with WebDriver for automating testing. As I understand it, when I define content on a page, the page element should be looked up each time I invoke a content definition.

        // In the content block of SomeModule, which is part of a moduleList on the page:
        itemLoaded { waitFor { !loading.displayed } }
        loading { $('.loading') }

        // in the page definition
        moduleItems { index -> moduleList SomeModule, $("#module-list > .item"), index }

        // in a test on this page
        def item = moduleItems(someIndex)
        assert item.itemLoaded

    So in this code, I think $('.loading') should be called repeatedly to find the element on the page by its selector, within the context of the module's base element. Yet I sometimes get a StaleElementReferenceException at this point. As far as I can tell, the element does not get removed from the page, but even if it did, that should not produce this exception unless $ is doing some caching behind the scenes; and if that were the case it would cause all sorts of other problems. Can someone help me understand what's happening here? Why is it possible to get a StaleElementReferenceException while looking up an element? A pointer to relevant documentation or geb source code would be useful as well.

    Read the article

  • MVC -DropDownList - Categories - SubCategories

    - by Frajer
    In my database I have tables Categories and SubCategories. I would like to create one DropDownList containing both of these. Something like:

        «Välj»
        <option value='1000' style='background-color:#dcdcc3;font-weight:bold;' id='cat1000'>
          -- FORDON --                /// this is from the Categories table
        </option>
        <option value='1020' id='cat1020'>
          Bilar                       /// this is from SubCategories
        </option>
        <option value='1040' id='cat1040'>
          Bildelar & Biltillbehör     /// this is from Categories
        </option>
        <option value='1060' id='cat1060'>
          Båtar                       /// this is from Categories
        </option>
        <option value='1080' id='cat1080'>
          Båtdelar & tillbehör        /// this is from Categories
        </option>

    Any samples of how I could solve this? Should I use Helpers or an MVC UserControl? I think that caching is important in this case. Help me out! Thanks!!

    Read the article

  • WMI Rights required to read root\MicrosoftIISv2 in IIS7 with IIS6 compatibility mode

    - by JoeBilly
    I need to manage my IIS7 (Windows Server 2008) remotely with a WMI IIS6 API, so I added the IIS6 WMI Compatibility and IIS6 Metabase Compatibility roles to get access to the root\MicrosoftIIsv2 namespace. My domain account is not an administrator on the remote machine (when the account is an administrator, everything is OK). I configured these rights for my domain account to access the root\MicrosoftIIsv2 WMI namespace remotely; note that these rights work perfectly on IIS6 and Windows Server 2003:

        DCOM: account in the Distributed COM Users group; remote & local access to DCOM
        WMI:
          Root\CIMV2 (I need access here too): Execute Methods, Enable Account, Remote Enable
          Root\Default (I need access here too): Execute Methods, Enable Account, Remote Enable
          Root\MicrosoftIISv2: Execute Methods, Enable Account, Provider Write, Remote Enable
        IIS Metabase (Metabase Explorer): LM - Full Control (W3SVC inherits these permissions)

    I also tried giving some access on C:\Windows\System32\inetsrv; I don't know if that is needed. My issue is: I can't list the IIS websites (\root\MicrosoftIISv2:IIsWebServerSetting.Name="W3SVC/*"). I don't get an 'access denied', but nothing is returned.

        My API and PowerShell tests can connect and execute queries in the root\MicrosoftIISv2 namespace.
        I can read the IIsComputer class, e.g.:
          Get-WmiObject IIsComputer -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT *
        I can't read IIsWebServerSetting, IIsWebServer, ... to list the websites: the query returns an empty collection, e.g.:
          Get-WmiObject IIsWebServerSetting -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT ServerComment
        All queries work perfectly if the account is an administrator, as already said.
        I am using PacketPrivacy authentication.

    FYI: I get a Warning Event 5605 whether the account is an administrator or not, so that does not seem to have an impact: "The root\MicrosoftIISv2 namespace is marked with the RequiresEncryption flag. Access to this namespace might be denied if the script or application does not have the appropriate authentication level. Change the authentication level to Pkt_Privacy and run the script or application again." OK, I have some more information: when I use IIS 6 Metabase Explorer with my administrator account, I can see the rights are correctly inherited for my non-administrator account. But when I try to connect using my non-administrator account, I can list the LM node, but get an "access denied, failed to get a key's data" error when I try to browse the child nodes. I'll check further. I also tried tracing the WMI activity, and everything seems OK; this tends to confirm that the problem lies in the IIS rights.

    Read the article

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed. SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12 Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization. The circuit is very stable and has a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS. RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak), for example. But the pricing is outside of our current budget, and the need isn't quite there yet from our perspective (since the issue is mainly a "pull" scenario, not necessarily push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a table comparison, and assuming we are looking at a "hosted BranchCache server" and not simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server. QUESTIONS I ACTUALLY HAVE:
    In what scenarios does each of these technologies shine, and where do you choose one over the other?
    Looking at a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible)?
    Looking at DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure the file gets updated properly during a write operation (i.e., if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). Ideally, it would seem, you would lock any alternate replicas of the data, but is it really that big of an issue?
    Does BranchCache lock the central file for editing?
    Does BranchCache only transmit back to the central file the deltas of what has changed?
    Would either technology be ill-advised if the branch office server was also going to be utilized as a domain controller?

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; but, on closer inspection, it appears that it was all recorded at once. Here are the facts as I know them:

        Windows Server 2003 R2 w/ IIS6
        Logging using GMT, server local time GMT-7
        Application was still operating and I have SQL data to prove that
        Time gaps appear within a log file, not across two
        # headers appear at the gap
        Load balancer pings every 30 seconds
        No caching

    Here's info on a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries stamped 2009-09-22 01:21:54 are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); after that, entries are recorded as normal. I believe that these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

        ex090921.log line 3684
        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

        ex090922.log
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So, this problem definitely coincides with me re-enabling the server. Any ideas?

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16 GB of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached in tmpfs but writes safely go through to the SSD. I want files (or blocks) that haven't been read lately on the SSD to then be written back to a HDD using a compressed FS or block layer. So basically:
    Reads: check tmpfs, then the SSD, then the HDD.
    Writes: go straight to the SSD (for safety), then tmpfs (for speed).
    Periodically, or when space gets low: move the least frequently accessed files down one layer.

    I've seen a few projects of interest. CacheFS, cachefsd, and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than an FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, the kernel and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess:
    Recommended FS and/or block layer for the SSD and compressed HDD.
    Recommended mkfs parameters (block size, options, etc.).
    Recommended cache/mount technology to bind the layers transparently.
    Required mount parameters.
    Required kernel options / patches, etc.

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >