Search Results

Search found 2888 results on 116 pages for 'scale'.

Page 101/116

  • Convert IIS / Tomcat Web Application to a multi-server environment.

    - by bill_the_loser
    I have an existing web application built in .NET, running on IIS, that leverages a Java servlet we run on Tomcat 5.5. We need to scale the application, and I'm confused about what applies to our situation and what we need to do to get the servlet running on multiple servers. Right now I have 4 servers that can each process results individually, so it almost seems like all I should have to do is add the ajp13 worker processes from the three additional machines to the machine hosting the load-balancer worker. But I can't imagine it should be that easy. What do I need to do to distribute the Tomcat load to the extra three machines? Thanks.

    Update: The current configuration uses a workers2.properties configuration file. From all of the documentation online I have not been able to determine the distinction between workers.properties and workers2.properties. Most of the examples I have found configure workers.properties and revolve around adding workers and registering them in the worker.list element. The workers2.properties does not appear to have a worker.list element, and the syntax differs enough between the two formats that I'm doubtful I can add that element. If I just add my multiple AJP workers to the workers2.properties file, do I need to worry about the apparent lack of a worker.list element?

        [ajp13:localhost:8009]
        channel=channel.socket:localhost:8009
        group=lb

        [ajp13:host2.mydomain.local:8009]
        channel=channel.socket:host2.mydomain.local:8009
        group=lb

        [ajp13:host3.mydomain.local:8009]
        channel=channel.socket:host3.mydomain.local:8009
        group=lb

    A couple of side notes... One, I've noticed that sometimes Tomcat doesn't seem to reload my changes and I don't know why. Also, I have no idea why this configuration has a workers2.properties and not a workers.properties. I've been assuming that it's based on version, but I haven't seen anything to back up that assumption.
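    For comparison, this is roughly what the same three-node load balancer looks like in the classic workers.properties syntax (a sketch only; the host names are taken from the snippet above, and it is not a drop-in replacement for a workers2.properties setup):

        worker.list=lb
        worker.lb.type=lb
        worker.lb.balance_workers=node1,node2,node3

        worker.node1.type=ajp13
        worker.node1.host=localhost
        worker.node1.port=8009

        worker.node2.type=ajp13
        worker.node2.host=host2.mydomain.local
        worker.node2.port=8009

        worker.node3.type=ajp13
        worker.node3.host=host3.mydomain.local
        worker.node3.port=8009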

    Read the article

  • Crystal Reports Cross Tab Conditional Formatting

    - by ltran
    I would like to achieve a simplified result similar to the "Color Scale" function in Excel, i.e. gradient colouring based on the lowest value (red) to highest value (green), except in my cross tab using Crystal Reports 2008. My cross tab looks a little like this:

        HOURS   | L1 | L2 | L3 | L4 | Total
        1 hours |  5 |  0 |  1 | 16 |  22
        2 hours |  0 |  1 |  0 | 10 |  11
        3 hours |  8 |  2 |  6 | 12 |  28
        TOTAL   | 13 |  3 |  7 | 38 |  61

    The principle of my function is to find the maximum value in the cross tab and then use the 20%, 40%, 60%, 80% thresholds to colour the background. The function is as follows (in the background format section):

        if currentfieldvalue < ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.2) then color(255,0,0)
        else if (currentfieldvalue >= ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.2) and currentfieldvalue < ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.4)) then color(255,192,0)
        else if (currentfieldvalue >= ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.4) and currentfieldvalue < ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.6)) then color(255,255,0)
        else if (currentfieldvalue >= ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.6) and currentfieldvalue < ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.8)) then color(146,208,80)
        else if (currentfieldvalue >= ((Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1)))*0.8)) then color(0,176,80)

    It's not elegant, nor does it work properly; any assistance/suggestions would be much appreciated. I wasn't expecting it to be so complicated, as originally I was working with the formula below, assuming it would work, except it tells me that "CurrentFieldValue" is not a field:

        if CurrentFieldValue < ((Maximum (CurrentFieldValue))*0.2) then color(255,0,0)
        else if ... etc.
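    As a tidier starting point, the same thresholds can be written with the maximum computed once into a local variable (a sketch only; it keeps the Maximum(MakeArray(...)) expression verbatim from the formula above, so whether that expression actually returns the grid maximum still needs checking):

        local numbervar gridMax := Maximum (MakeArray (CurrentColumnIndex, CurrentRowIndex, 1));
        if currentfieldvalue < gridMax * 0.2 then color(255,0,0)
        else if currentfieldvalue < gridMax * 0.4 then color(255,192,0)
        else if currentfieldvalue < gridMax * 0.6 then color(255,255,0)
        else if currentfieldvalue < gridMax * 0.8 then color(146,208,80)
        else color(0,176,80)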

    Read the article

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70,000 items. We have a user control in a SmartPart to display the count. Ours is a TEAM site. This is the code to get the total count:

        SPList VoulnterrList = web.Lists[ListTitle];
        SPQuery query = new SPQuery();
        query.ViewAttributes = "Scope=\"Recursive\"";
        string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
        query.Query = queries;
        SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query);
        lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString();

    The problem is this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment this out, the page load drops to 4 seconds. Is there a better approach to this problem? Ours is SharePoint 2007.
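    For reference, a sketch of one commonly suggested variant (untested here): read the total from SPList.ItemCount, which SharePoint maintains as a stored property, instead of enumerating list.Items, and keep the CAML query only for the pending count while pulling back a single field:

        // total count without enumerating every item in the library
        lblCount.Text = "Total Proofs: " + VoulnterrList.ItemCount.ToString();

        // pending count: same CAML filter as above, but only the ID field is materialized
        SPQuery query = new SPQuery();
        query.ViewAttributes = "Scope=\"Recursive\"";
        query.Query = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
        query.ViewFields = "<FieldRef Name='ID' />";
        lblCount.Text += " Pending Proofs: " + VoulnterrList.GetItems(query).Count.ToString();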

    Read the article

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for a given keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds with a moderate load, even longer during traffic spikes, with current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so it doesn't help a lot to cache the page on the first visit: it will still be slow for that visit, and the next visit could be in several weeks. I'd really like to make those pages faster, but I don't have too many ideas on how to do it. A couple of things that come to mind: build a cache for all keywords beforehand (with a very long TTL, a month or so) - however, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible; or, given the volatile nature of this data, don't try to cache anything at all and just try to scale out to keep up with traffic. I'd really appreciate any feedback on this problem.

    Read the article

  • Setting Android webview initialScale prevents proper zooming

    - by Ryan
    Need: I want a WebView to automatically be sized to fit the width of that particular page. I have Googled this and found several different suggestions. Most of them work, but each of them affects zooming in / zooming out. What I'm looking for is a solution that accomplishes both: the WebView is initially sized to fill the screen, but it still allows the user to zoom in (with pinching) and zoom out.

    What I've Tried:

        mainView.getSettings().setLoadWithOverviewMode(true);
        mainView.getSettings().setUseWideViewPort(false);
        mainView.setScrollBarStyle(WebView.SCROLLBARS_OUTSIDE_OVERLAY);
        mainView.getSettings().setBuiltInZoomControls(true);
        mainView.getSettings().setSupportZoom(true);

    I have also tried setting mainView.setInitialScale(various percentages). Again, I have tried these in different orders, including some, not including others. Currently, if I use the above code and setInitialScale(65), it loads fine initially, but once you zoom in, you cannot zoom all the way back out. Does anyone know the best practice to set the initial scale to fit the screen but fully allow zooming out and in?

    Why I Need It: I'm using a ViewFlipper in my Android app to load several WebViews simultaneously. I have a touch sensor that allows sliding from left to right to switch between different WebViews. The practical purpose of this is to show a grocery store's ads and allow the user to slide from page to page. The problem is that the API feed I'm using basically only allows me to load a URL for each page. So I have to use WebViews.
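    For reference, the combination usually suggested for "fit to width on load, pinch zoom afterwards" looks roughly like this (a sketch, not verified against this particular ViewFlipper setup; note it uses setUseWideViewPort(true) rather than false):

        WebSettings settings = mainView.getSettings();
        settings.setUseWideViewPort(true);      // lay the page out against the wide viewport
        settings.setLoadWithOverviewMode(true); // then zoom out so the full page width fits the screen
        settings.setBuiltInZoomControls(true);
        settings.setSupportZoom(true);
        // no setInitialScale() call, so zooming back out is not capped at the forced scale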

    Read the article

  • Javascript Application Book

    - by Jormundir
    Can anyone recommend a good book on JavaScript module/application development? I'm a software engineer, so I don't need all the intro-to-programming stuff. What I'm really looking for is: how do you bundle the HTML/CSS/JavaScript together so that you can make one include that will load the whole application? I.e.:

        <div id="myapplication"></div>
        ...
        ...
        <script src="myapplication.js">

    Design patterns are always welcome. I've already read JavaScript: The Good Parts and online guides, but it's hard to find a comprehensive guide/tutorial for specifically this. There's a lot of good "this is a JavaScript application" and "this is a scalable framework", but I haven't had any luck with "this is how you build a JavaScript application, including the HTML and CSS, and this is how you deliver it nicely". I'm building a small application to start, so I'm not interested in scalability and large-scale development practices, just a nice and comprehensive guide to get me off the ground.

    Read the article

  • Forcing size of a complex Flash object.

    - by John
    As I've found recently, setting width/height properties on a Sprite only forces the Sprite to fit the given dimensions by scaling the actual size, which is calculated by Flash based on the rendered content. This leaves me confused. If I have a custom Sprite subclass which draws using Graphics, how can I do layout before an initial render - the size will be zero until it is first drawn? For a more complex issue, let's say I have a 2D game world with objects spread over a wide area with world coordinates from (0,0) to (5000,5000), where each object has a size of maybe up to 100x100. I want to have a Flash component which is the "game window", and has a fixed size like 400x300, rendering part of the game world. So how do I force the game window size to 400x300 pixels? I can draw a 400x300 rectangle to get the size correct but then if I draw any objects which are partly in-view, they can screw this up. Is the right approach to provide a custom setSize(w,h) method which is used rather than width & height setters? But even so, is there no way to make a Sprite force to the size I want? Do you really have to catch it every render and re-scale it?

    Read the article

  • Designing for varying mobile device resolutions, i.e. iPhone 4 & iPhone 3G

    - by Josh
    As the design community moves to design applications & interfaces for mobile devices, a new problem has arisen: varying screen DPIs. Here's the situation:

    Touch:
        * iPhone 3G/S ~ 160 dpi
        * iPhone 4 ~ 300 dpi
        * iPad ~ 126 dpi
        * Android device @ 480p ~ 200 dpi

    Point / click:
        * Laptop @ 720p ~ 96 dpi
        * Desktop @ 720p ~ 72 dpi

    There is certainly a clear distinction between desktop and mobile, so having two separate front-ends to the same app is logical, especially when considering one is "touch"-based and the other is "point/click"-based. The challenge lies in designing static graphical elements that will scale between, say, 160 dpi and 300+ dpi, and get consistent and clean design across zoom levels. Any thoughts on how to approach this? Here are some scenarios, but each has drawbacks as well:

        * Design a single set of assets (high resolution), then adjust zoom levels based on detected resolution / device
          o Drawbacks: performance cost of the extra code layering, varying device support for zoom
        * Develop & optimize multiple variations of image and CSS assets, then hide / show each based on device
          o Drawbacks: extra work in design & QA

    Anyone have thoughts or experience on how to deal with this? We should certainly be looking at methods that use / support HTML5 and CSS3.
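    One approach worth noting for the second scenario (a sketch with made-up asset names): keep a single layout and swap only the raster assets with a device-pixel-ratio media query, which is how WebKit-based mobile browsers of this generation expose the 160 vs. 300+ dpi difference to CSS:

        /* baseline asset for ~160 dpi class screens */
        .logo { background-image: url(logo.png); background-size: 100px 40px; }

        /* double-density screens such as iPhone 4 get the high-resolution asset at the same CSS size */
        @media only screen and (-webkit-min-device-pixel-ratio: 2) {
          .logo { background-image: url(logo@2x.png); background-size: 100px 40px; }
        }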

    Read the article

  • MVC design pattern in complex iPad app: is one fat controller acceptable?

    - by nutsmuggler
    I am building a complex iPad application; think of it as a scrapbook. For the purpose of this question, let's consider a page with two images over it. My main view displays my doc data rendered as a single UIImage; this is because I need to do some global manipulation over them. This is my DisplayView. When editing, I need to instantiate an EditorView with my two images as subviews; this way I can interact with a single image (rotate it, scale it, move it). When editing is triggered, I hide my DisplayView and show my EditorView. In an iPhone app, I'd associate each main view (that is, a view filling the screen) with a view controller. The problem here is that there is just one view controller; I've considered presenting the EditorView via a modal view controller, but it's not an option (there is a complex layout with a mask covering everything and palettes over it; rebuilding it in the EditorView would create duplicate code). Presently the EditorView incorporates some logic (loads data from the model, invokes some subviews for fine editing, saves data back to the model); EditorView subviews also incorporate some logic (I manipulate images and pass them back to the main EditorView). I feel this logic belongs more to a controller. On the other hand, I am not sure making my only view controller so fat is a good idea. What is the best, Cocoa-ish implementation of such a class structure? Feel free to ask for clarifications. Cheers.

    Read the article

  • Python "callable" attribute (pseudo-property)

    - by mgilson
    In Python, I can alter the state of an instance by directly assigning to attributes, or by making method calls which alter the state of the attributes:

        foo.thing = 'baz'

    or:

        foo.thing('baz')

    Is there a nice way to create a class which would accept both of the above forms, and which scales to large numbers of attributes that behave this way? (Shortly, I'll show an example of an implementation that I don't particularly like.) If you're thinking that this is a stupid API, let me know, but perhaps a more concrete example is in order. Say I have a Document class. Document could have an attribute title. However, title may want to have some state as well (font, fontsize, justification, ...), but the average user might be happy enough just setting the title to a string and being done with it ... One way to accomplish this would be to:

        class Title(object):
            def __init__(self,text,font='times',size=12):
                self.text = text
                self.font = font
                self.size = size
            def __call__(self,*text,**kwargs):
                if(text):
                    self.text = text[0]
                for k,v in kwargs.items():
                    setattr(self,k,v)
            def __str__(self):
                return '<title font={font}, size={size}>{text}</title>'.format(text=self.text,size=self.size,font=self.font)

        class Document(object):
            _special_attr = set(['title'])
            def __setattr__(self,k,v):
                if k in self._special_attr and hasattr(self,k):
                    getattr(self,k)(v)
                else:
                    object.__setattr__(self,k,v)
            def __init__(self,text="",title=""):
                self.title = Title(title)
                self.text = text
            def __str__(self):
                return str(self.title)+'<body>'+self.text+'</body>'

    Now I can use this as follows:

        doc = Document()
        doc.title = "Hello World"
        print (str(doc))
        doc.title("Goodbye World",font="Helvetica")
        print (str(doc))

    This implementation seems a little messy though (with _special_attr). Maybe that's because this is a messed up API. I'm not sure. Is there a better way to do this? Or did I leave the beaten path a little too far on this one? I realize I could use @property for this as well, but that wouldn't scale well at all if I had more than just one attribute which is to behave this way -- I'd need to write a getter and setter for each, yuck.
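    For what it's worth, one way to avoid the per-class _special_attr bookkeeping is a small descriptor that routes plain assignment through the stored object's __call__ (a sketch only; CallableField and its internals are illustrative names, and it assumes the Title class above):

        class CallableField(object):
            """Descriptor: assignment to the attribute is forwarded to the stored object's __call__."""
            def __init__(self, name, factory):
                self.name = '_' + name      # where the real object lives on the instance
                self.factory = factory

            def __get__(self, obj, objtype=None):
                if obj is None:
                    return self
                return getattr(obj, self.name)

            def __set__(self, obj, value):
                if hasattr(obj, self.name):
                    getattr(obj, self.name)(value)          # already built: assignment becomes a call
                else:
                    object.__setattr__(obj, self.name, self.factory(value))

        class Document(object):
            title = CallableField('title', Title)           # repeat for any number of such attributes

            def __init__(self, text="", title=""):
                self.title = title
                self.text = text

            def __str__(self):
                return str(self.title) + '<body>' + self.text + '</body>'

    Usage is then the same as in the original example: doc.title = "Hello World" replaces the text, while doc.title("Goodbye World", font="Helvetica") updates text and font together.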

    Read the article

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi everyone. We ran into serious performance problems with our Oracle database and we would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know if all features of the new database match our needs. So, here is our situation: The Oracle database consists of multiple tables, each with millions of rows. During the day, there are literally thousands of statements, which we cannot stop for migration. Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel. But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from one day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that, at the very least, the export should be finished in less than one day. Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables as CSV files might work, but I'm afraid it could take too long. So my question now is: What should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration? Thanks in advance, Cassy PS: Please, don't suggest performance optimization techniques for Oracle, we already tried a lot :-)
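    One note on the consistency requirement: Oracle's flashback query lets every per-table export be taken "as of" the same moment, so the CSV route can still yield a single-day-consistent snapshot as long as undo retention covers the export window. A rough sketch (table name and timestamp are placeholders):

        -- run once per table, all using the same timestamp
        SELECT *
        FROM   orders AS OF TIMESTAMP TO_TIMESTAMP('2010-06-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

    The classic exp utility's CONSISTENT=y flag gives the same point-in-time guarantee for a whole dump, but that produces an Oracle-only dump file rather than something MySQL can load.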

    Read the article

  • iPad Simulator Multitouch Cursors Don't Show Up When Window is Scaled 100%

    - by Joel
    I have the iPhone SDK 3.2 installed and have been working on an iPad application. However, the iPad simulator doesn't show the two gray multitouch "cursors" when I hold down the ALT/OPTION button and move the mouse around. This only happens when the simulator scale size is set to 100%. If I have it set to 50%, they show up. When I have it set to be an iPhone, they show up. It's only iPad 100% size. The multitouch still works fine, I just can't see where I'm "touching". I've tried closing the simulator completely, changing from the iPhone and back again, resizing, all sorts of stuff. Has anyone else seen this problem? Anyone have any suggestions for fixing this? I've googled and searched SOF for anyone else having this problem, but I kinda wonder if it's just me. If it makes a difference, I have a Mac Mini 1.83 GHz Intel Core 2 Duo with Snow Leopard 10.6.3 installed. Thanks.

    Read the article

  • Make a div content (googlemap) fullscreen

    - by lena2211
    Hi, I am trying to make a button that will turn the Google Map div into fullscreen. This is what I have until now, but it is not working correctly. The problem is that the map only half loads (the code is below, and a screenshot). How can I fix this? Where is the problem? Thanks in advance.

    http://img32.imageshack.us/img32/9365/halfload.gif

        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript">
        function initialize() {
          var latlng = new google.maps.LatLng(-34.397, 150.644);
          var myOptions = {
            zoom: 8,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          };
          var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
        }
        function fs() {
          var elem = document.getElementById("map_canvas");
          elem.style.position="absolute";
          elem.style.width="100%";
          elem.style.height="100%";
          elem.style.top="0px";
          document.body.style.overflow = "hidden";
        }
        </script>
        </head>
        <body onload="initialize()">
        <div id="map_canvas" style="width:400px; height:300px"></div>
        <a href="#" onclick ="fs()">makefullscreen</a>
        </body>
        </html>
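    The "half loaded" look usually means the map is still drawing tiles for the old 400x300 canvas. In the Maps JavaScript API v3, the usual fix is to fire the map's resize event after changing the container size; a sketch, assuming map is made accessible outside initialize() (e.g. declared as a global rather than with var inside the function):

        function fs() {
          var elem = document.getElementById("map_canvas");
          elem.style.position = "absolute";
          elem.style.top = "0px";
          elem.style.width = "100%";
          elem.style.height = "100%";
          document.body.style.overflow = "hidden";
          google.maps.event.trigger(map, "resize");   // tell the API the canvas dimensions changed
        }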

    Read the article

  • syntax for MySQL INSERT with an array of columns

    - by Mike_Laird
    I'm new to PHP and MySQL query construction. I have a processor for a large form. A few fields are required; most fields are optional for the user. In my case, the HTML ids and the MySQL column names are identical. I've found tutorials about using arrays to convert $_POST into the fields and values for INSERT INTO, but I can't get them working - after many hours. I've stepped back to make a very simple INSERT using arrays and variables, but I'm still stumped. The following line works and INSERTs 5 items into a database with over 100 columns. The first 4 items are strings; the 5th item, monthlyRental, is an integer.

        $query = "INSERT INTO `$table` (country, stateProvince, city3, city3Geocode, monthlyRental) VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')";

    When I make an array for the fields and use it, as follows:

        $colsx = array('country,', 'stateProvince,', 'city3,', 'city3Geocode,', 'monthlyRental');
        $query = "INSERT INTO `$table` ('$colsx') VALUES ( '$country', '$stateProvince', '$city3', '$city3Geocode', '$monthlyRental')";

    I get a MySQL error - check the manual that corresponds to your MySQL server version for the right syntax to use near ''Array') VALUES ( 'US', 'New York', 'Fairport, Monroe County, New York', '(43.09)' at line 1. I get this error whether the array items have commas inside the single quotes or not. I've done a lot of reading and tried many combinations and I can't get it. I want to see the proper syntax on a small scale before I go back to foreach expressions to process $_POST, where both the fields and values are arrays. And yes, I know I should use mysql_real_escape_string, but that is an easy later step in the foreach. Lastly, some clues about the syntax for an array of values would be helpful, particularly if it is different from the fields array. I know I need to add a null as the first array item to trigger the MySQL autoincrement id. What else? I'm pretty new, so please be specific.
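    For illustration, the usual fix is to join the array into a string with implode() before putting it in the SQL - interpolating the array itself is what yields the literal 'Array' in the error message. A sketch (column names without commas, and the values escaped as the question already anticipates):

        $colsx  = array('country', 'stateProvince', 'city3', 'city3Geocode', 'monthlyRental');
        $values = array($country, $stateProvince, $city3, $city3Geocode, $monthlyRental);

        // escape each value, then join both lists with commas
        $escaped = array_map('mysql_real_escape_string', $values);
        $query = "INSERT INTO `$table` (" . implode(', ', $colsx) . ") "
               . "VALUES ('" . implode("', '", $escaped) . "')";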

    Read the article

  • Jquery image slider window resize scaling issue

    - by eb_Dev
    Hi, I have created an image gallery slider whose images scale with the window size. My problem is I can't seem to come up with the right formula to ensure that the slider is at the right position when the window is scaled. If I am at the first image and the slider 'left' offset is 0, the scaling works fine and my offset is correct. However, when I am on image two and my slider is offset by, say, -1500 and then the window is resized, the slider ends up at the wrong offset. Taking this into account, I figured I could just add / subtract the window size difference from the slider offset and therefore have everything in the right position. I was wrong. It works for the first two images, but if I am at the end of my slider on the last image I get a nice big gap between the end of the last image and the border of the page. Can someone please show me where I am going wrong? I've spent days on this :( The relevant code is as follows:

        $(window).bind('resize', function(event, delta) {
            /* Resize work images - resizes all to document width */
            resizeWorkImages('div.page.work div#work ul');
            /* Position slider controls */
            positionSliderControls();
            /* Get the difference between the old window width and the new */
            var windowDiff = (gLastWindowWidth - $(window).width()) * -1;
            /* Apply difference to slider left position */
            var newSliderLeftPos = stripPx($('div.page.work div#work ul').css('left')) - windowDiff;
            /* Apply to slider */
            $('div.page.work div#work ul').css('left', newSliderLeftPos + 'px');
            /* Update gSlider settings */
            gSlider.update();
            /* Record new window width */
            gLastWindowWidth = $(window).width();
        });

    Thanks, eb_dev
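    Since the images scale proportionally with the window, the stored offset has to be scaled by the same width ratio rather than shifted by the pixel difference, otherwise the error grows the further along the slider you are. A sketch using the same selectors and globals as above (untested against the full gallery code):

        $(window).bind('resize', function () {
            resizeWorkImages('div.page.work div#work ul');
            positionSliderControls();

            // scale the offset by the ratio of the new window width to the old one
            var newWidth = $(window).width();
            var oldLeft  = stripPx($('div.page.work div#work ul').css('left'));
            var newLeft  = oldLeft * (newWidth / gLastWindowWidth);

            $('div.page.work div#work ul').css('left', newLeft + 'px');
            gSlider.update();
            gLastWindowWidth = newWidth;
        });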

    Read the article

  • How to store a captured image into MySQL database using JavaScript

    - by R J.
    I am capturing an image using canvas, and I want to store the captured image in a MySQL database using JavaScript. This is my code:

        <html>
        <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, maximum-scale=1.0">
        <style>
          body {width: 100%;}
          canvas {display: none;}
        </style>
        <title>Instant Camera - Remote</title>
        <script>
        var video, canvas, msg;
        var load = function () {
          video = document.getElementById('video');
          canvas = document.getElementById('canvas');
          msg = document.getElementById('error');
          if( navigator.getUserMedia ) {
            video.onclick = function () {
              var context = canvas.getContext("2d");
              context.drawImage(video, 0, 0, 240, 320);
              var image1 = canvas.toDataURL("image/png");
              document.write('<img src="' + image1 + '" />');
            };
          } else {
            msg.innerHTML = "Native web camera not supported :(";
          }
        };
        window.addEventListener('DOMContentLoaded', load, false);
        </script>
        </head>
        <body>
        <video id="video" width="240" height="320" autoplay> </video>
        <p id="error">Click on the video to send a snapshot to the receiving screen</p>
        <canvas id="canvas" width="240" height="320"> </canvas>
        </body>
        </html>
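    Worth noting: browser JavaScript cannot write to MySQL directly; the data URL has to be sent to a server-side script that performs the INSERT. A sketch of the client half, with a hypothetical save_image.php endpoint (the PHP side would base64-decode the payload and store it in a BLOB or LONGTEXT column):

        // inside video.onclick, after canvas.toDataURL(...)
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "save_image.php", true);   // placeholder server-side script
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.send("image=" + encodeURIComponent(image1));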

    Read the article

  • Load image blurred Android

    - by Mira
    I'm trying to create a map for a game from an image, where each black pixel is equivalent to a wall, yellow to flowers (1) and green to grass (0). So far I have this image (50x50): http://i.imgur.com/Ydj9Cp2.png The problem seems to be that when I read the image in my code, it gets scaled up to 100x100, even though I have it in the raw folder. I can't let it scale up or down, because that will add noise and blur to the image and then the map won't be readable. Here is my code:

        (...)
        Bitmap tab = BitmapFactory.decodeResource(resources, com.example.lolitos2.R.raw.mappixel);
        //tab = Bitmap.createScaledBitmap(tab, 50, 50, false);
        Log.e("w", tab.getWidth() + "." + tab.getHeight());
        for (int i = 0; i < tab.getWidth(); i++) {
            for (int j = 0; j < tab.getHeight(); j++) {
                int x = j;
                int y = i;
                switch (tab.getPixel(x, y)) {
                    // if it is a wall
                    case Color.BLACK:
                        getParedes()[x][y] = new Parede(x, y);
                        break;
                    case Color.GREEN:
                        fundo.add(new Passivo(x, y, 0));
                        break;
                    default:
                        fundo.add(new Passivo(x, y, 1));
                }
            }
        }

    How can I read my image map without rescaling it?
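    For reference, the scaling comes from density compensation during decoding; the commonly suggested fix (a sketch, untested against this project) is to pass BitmapFactory.Options with inScaled disabled, or equivalently to keep the file under res/drawable-nodpi:

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;   // keep the bitmap at its original 50x50 pixel size, no density scaling
        Bitmap tab = BitmapFactory.decodeResource(resources, com.example.lolitos2.R.raw.mappixel, opts);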

    Read the article

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states, will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing my needs and need to build a RAID setup that will allow me to expand as needed. I am new to RAID setups, especially one of the scale I have currently planned out, but I have been doing some research for the past couple of weeks and have come up with a build. Ideally, I'd have the setup completely built, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option. 2 of my current drives are WD Caviar Blacks 2TB; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a Seagate Barracuda 3TB (ST300DM001) and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive, if possible. Have any of you had any experience using this drive or a similar one in a RAID5 configuration? The manufacturer states that it supports it, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double in cost...

    Parts list:
        Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
        3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
        Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224 (here's a link to the compatibility sheet if that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf)
        SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp

    My plan is to install the RAID card in my computer and then route the SAS cable to the rack, set up a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually, I'd like to get a 2U unit, run the file server on that, and move the RAID card over to it, but that will have to happen later on. Side note: the computer the card would be going into will be running Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado:

    Vision. The first thing in his "vision statement" is: "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent." There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:
        * Obliterate
        * Shelve/Checkpoint
        * Repository-dictated Configuration
        * Rename Tracking
        * Improved Merging
        * Improved Tree Conflict Handling
        * Enterprise Authentication Mechanisms
        * Forward History Searching
        * Log Message Templates
    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community. And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • SMPS stops when I plug in a SATA drive?

    - by claws
    Hello,

    Part 1: My first question is: are all the 4-wire power connectors (intended for hard disks / DVD drives, not the motherboard) the same? I've been using them interchangeably and had no problems for years. Yesterday I borrowed a SATA disk from my friend and connected it to my computer using a SATA power adaptor (4 wire), and when I switched on the computer there were fumes coming out of the connector. I immediately turned it off (within a second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3V & 12.2V. I couldn't measure the current. My SMPS label reads:

        DC Output: 3.3V (25A), +5V (32A), -5V (0.3A), +12V (17A), -12V (0.8A)

    And the SATA hard disk label reads:

        Input: +5V (0.72A), +12V (0.52A)

    I'm shocked! I never noticed this. Does the "SATA power adaptor" scale the current down to what is required? If it doesn't, I've been connecting things the same way for years and never had any problem; this is the first time I'm encountering it.

    Part 2: I wanted to return the drive to my friend. He has two hard disks, SATA & PATA; it's the SATA one that I borrowed. When he normally switches on, the CPU fan starts, then stops for a second, and starts again and continues working. That was the earlier situation; I don't know why it stops & starts. Now, when I connect this SATA disk and switch ON the computer, the CPU fan starts (just for an instant, not even 0.5 sec) and stops. It doesn't start again - I mean the power from the SMPS has stopped. But if I disconnect this SATA disk, it works fine. What seems to be the problem? I have no idea why there were fumes, or why his SMPS starts & stops giving power, or what its relation to the SATA disk connection is.

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are:

        * HA protection from storage failure.
        * Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service packs).
        * Must not involve much in the way of hardware procurement.

    The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server failover clustering requires the purchase of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option (as it only requires two database servers and a simple witness box), but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place.

    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, then it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications - does the client connection failover mean there is a potential 2-second (assuming a 2000 ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt?

    I read somewhere that mirroring can be used on top of MSCS, which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string); however, I'm finding it hard to get documentation or white papers on this approach. But if true, then the best method would be mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
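    For context on the "client must be aware of the partners" point: with plain database mirroring, the client-side awareness usually amounts to nothing more than the connection string, e.g. (placeholder server names; a sketch of the SqlClient syntax rather than a full HA recommendation):

        Server=sqlprimary;Failover Partner=sqlmirror;Database=ClientDb;Integrated Security=True;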

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:

        * Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
        * If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be).
        * Avoid out-of-kernel modules.
        * R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
        * Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required, but nice-to-haves:

        * Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
        * Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
        * Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
        * Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
        * Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

        * Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
        * Write IOPS does not scale with the number of devices in a raidz vdev.
        * Cannot add disks to raidz vdevs.
        * Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.

    Read the article

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various servers, using hostgroups to define 'machine roles', against which I run services to check the machines by role. However, I'd like to use conditional operators that would enable me to run the service check against an intersection of two host groups, rather than their union... i.e. using &&, ||, or () operators. For example, imagine I have the following servers:

        www-eu: Linux WWW (Apache) server, in the EU
        www-us: Windows WWW (IIS) server, in the US (West coast)
        ftp-eu: Linux FTP server, in the EU
        ftp-us: Windows FTP server, in the US

    I would want to create the following host groups:

        US-Servers: www-us, ftp-us
        EU-Servers: www-eu, ftp-eu
        WWW-Servers: www-us, www-eu
        FTP-Servers: ftp-us, ftp-eu

    Now say I'm interested in checking the HTTP response time for my web servers. Let's say this particular Nagios service is running from the US (West Coast), and that I have a command called *check_http_response_time*. This command checks the responsiveness of the HTTP server, and I can provide an argument which defines the max response time before raising critical. My command might look like:

        check_http_response_time $HOSTNAME$ 50

    Now traditionally, I can run my checks by specifying a list of hosts or hostgroups:

        define service{
            use                  local-service
            hostgroup_name       WWW-Servers   # Servers = www-us, www-eu
            servicegroups        WWW Checks
            service_description  Check HTTP Response Time
            check_command        check_http_response_time!50
        }

    However, with the above service definition, given my Nagios service is in US West, I could reasonably expect that my EU server will return critical. Really, I want different thresholds for each region (50 for US West, 200 for EU). I would have to permute my service for each host and set its custom threshold, or alternatively permute my service groups by role & region (i.e. WWW-Servers-EU) and run my specific thresholds against those. Though the latter is better, both are much messier than I'd like... What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like:

        define service{
            use                  local-service
            hostgroup_name       WWW-Servers && US-Servers
            servicegroups        WWW Checks
            service_description  Check HTTP Response Time
            check_command        check_http_response_time!50
        }

    It would then run the check only against servers that are in both WWW-Servers and US-Servers - in my example, just www-us. The benefits of such a feature would be significant for large-scale Nagios configurations. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this given the most recent Nagios version? Any tips/suggestions are most appreciated! Dave

    Read the article

  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group and even set the permissions to 777. I have confirmed with phpinfo() that the error_log is correctly set, however The logging still only happens in my vhost's apache error log. The following is my php.ini for 5.3.3-7 on Debian Squeeze Apache 2: The top is populated with comments on what I have been interested, or have changed. I have deleted all comments to save space. Full versions here: http://pastebin.com/AhWLiQBR [PHP] ;short_open_tag = On ;allow_call_time_pass_reference = On ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ;display_errors = On ;display_startup_errors = Off ;log_errors = On ;html_errors = On error_log = "/var/log/apache2/php_errors.log" engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = On safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED display_errors = On display_startup_errors = Off log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On variables_order = "GPCS" request_order = "GPC" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 100M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_tmp_dir = /tmp upload_max_filesize = 100M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 
[Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba]

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives For the record, Splunk rocks. But the pricing is simply beyond what we can consider (When I spoke with Splunk today, the cost for a system to index 5gb/day of data is over $30,000.) That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. etc. The splunk sales team is correct (that for $30K we get more value and functionality than if we spend the same building our own system), but it doesn't matter. The splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a splunk like system? Our basic need: Able to listen for syslog messages on multiple udp ports Able to index the incoming data in an async way Some kind of search engine Some kind of UI An API to the search engine (to embed in our console) We currently need to index 3-5gb/day, but need to be able to scale to 10gb/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts! UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: Write our own (we are a software company... we know how to write things). We built a great system built on mongodb and .NET that gives us the functions we needed from MongoDB in about one engineering week. We have now completed our implementation. We use two Mongodb servers (master and slave), and are able to log and index any amount of log data (5gb/day, 15gb/day, etc), limited only by disk space. OBSERVATIONS: This space needs a solid solution that is $1000-3000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" models. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that will be really usable. Thank you all for your input (even if it was self promotion).

    Read the article
