Search Results

Search found 2188 results on 88 pages for 'boxes'.

Page 81/88 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Best practices for displaying large number of images as thumbnails in c#

    - by andySF
    I got to a point where it's very difficult to get answers by debugging and tracing objects, so I need some help. What I'm trying to do: a history form for my screen capture pet project. The history must list all images as thumbnails (like Picasa). What I've done: I created a HistoryItem:UserControl. This history item has a few buttons, a check box, a label and a picture box. The buttons are for delete/edit/copy image. The check box is used for selecting one or more images and the label is for some info text. The picture box gets the image from a public property that holds a path, and a method creates a proportional thumbnail to display when the control has loaded. This user control has two public events: one for deleting the image and one for bubbling the mouse enter and mouse leave events through all controls. For this I use EventBroadcastProvider. The bubbling is useful because wherever I move the mouse over the control, the buttons appear. The Dispose method has been extended and I manually remove the events. All images are loaded by looping over an XML file that contains the path of every image. For each image in this XML I create a new HistoryItem that is added (after a little coding to sort and limit the number of images loaded) to a flow layout panel. The problem: when I launch the history form and the flow layout panel is populated with my HistoryItem custom controls, memory usage increases drastically, from 14 MB to around 100 MB with 100 images loaded. After closing the history form, disposing whatever I could dispose and even calling GC.Collect(), the memory increase remains. I searched for any object that might not be disposed properly, like an image or an event handler, but wherever I used them they are disposed. The problem seems to come from multiple sources: one is that the bubbling events are not being released properly, and the other is the picture box itself. I could see all of this by commenting the code down to a limited version where only the custom control, without any image processing or events, is loaded. Without the events the memory consumption is reduced by approximately 20%. So my real question is whether this approach, flow layout panels and custom controls with picture boxes, is the best solution for displaying large amounts of images as thumbnails. Thank you!
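
    For reference, a minimal C# sketch of one common way to keep PictureBox thumbnails from holding the full-size images in memory: build a small scaled copy, dispose the original straight away, and dispose the thumbnail when the item is removed (the method names below are illustrative, not taken from the project):

        // Requires System.Drawing / System.Windows.Forms.
        // Creates an independent scaled copy so the full-size image can be released immediately.
        private Image CreateThumbnail(string path, int width, int height)
        {
            using (var original = Image.FromFile(path))
            {
                return new Bitmap(original, new Size(width, height));
            }
        }

        // When a HistoryItem is removed, release the thumbnail explicitly.
        private void ReleaseThumbnail(PictureBox box)
        {
            if (box.Image != null)
            {
                box.Image.Dispose();
                box.Image = null;
            }
        }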

    Read the article

  • Add class to elements which already have a class

    - by bwstud
    I have a group of divs with the class "brick" which I'm dynamically generating when a button is clicked. This gives them their dimensions and a starting position of top: 0. I'm trying to get them to animate to the bottom of the view using a CSS transition with a second class assignment which gives them bottom: 0. I can't figure out the syntax for adding a second class to elements with a pre-existing class. On inspection they only show the original class, "brick". HTML <!DOCTYPE html> <html> <head> <script src="http://code.jquery.com/jquery-2.1.0.min.js"></script> <meta charset="utf-8"> <title>JS Bin</title> </head> <body> <div id="container"> <div id="button" >Click Me</div> </div> </body> </html> CSS #container { width: 100%; height: 100vh; padding: 10vmax; } #button { position: fixed; } .brick { position: relative; top: 0; height: 10vmax; width: 20vmax; background: white; margin: 0; padding: 0; transition: all 1s; } .drop { transition: all 1s; bottom 0; } The offending JS: var brickCount = function() { var count = prompt("How many boxes you lookin' for?"); for(var i=0; i < count; i++) { var newBrick = document.createElement("div"); newBrick.className="brick"; document.querySelector("#container") .appendChild(newBrick); } }; var getBricks = function(){ document.getElementByClass("brick"); }; var changeColor = function(){ getBricks.style.backgroundColor = '#'+Math.floor(Math.random()*16777215).toString(16); }; var addDrop = function() { getBricks.brick = "getBricks.brick" + " drop"; }; var multiple = function() { brickCount(); getBricks(); changeColor(); addDrop(); }; document.getElementById("button").onclick = function() {multiple();}; Thanks!
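
    For what it's worth, a small sketch of just the class-adding part, using the standard DOM classList API (or jQuery, since it is already loaded on the page); the rest of the page is assumed unchanged:

        // getElementsByClassName (plural) returns a live collection of matching elements.
        var addDrop = function () {
          var bricks = document.getElementsByClassName("brick");
          for (var i = 0; i < bricks.length; i++) {
            bricks[i].classList.add("drop"); // keeps "brick" and appends "drop"
          }
        };

        // Equivalent one-liner with jQuery:
        // $(".brick").addClass("drop");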

    Read the article

  • JQuery Cascade dropdown problem?

    - by omoto
    Hi all! I'm using a jQuery-based Cascade plugin. It mostly works, but I have found a lot of problems with it. Maybe somebody has already worked with this plugin and could help. I am using it for location filtering. Here is my C# code: public JsonResult getChildren(string val) { if (val.IsNotNull()) { int lId = val.ToInt(); Cookie.Location = val.ToInt(); var forJSON = from h in Location.SubLocationsLoaded(val.ToInt()) select new { When = val, Id = h.Id, Name = h.Name, LocationName = h.LocationType.Name }; return this.Json(forJSON.ToArray()); } else return null; } Here is my JS code: <script type="text/javascript"> function commonMatch(selectedValue) { $("#selectedLocation").val(selectedValue); return this.When == selectedValue; }; function commonTemplate(item) { return "<option value='" + item.Id + "'>" + item.Name + "</option>"; }; $(document).ready(function() { $("#chained_child").cascade("#Countries", { ajax: { url: '/locations/getChildren' }, template: commonTemplate, match: commonMatch }).bind("loaded.cascade", function(e, target) { $(this).prepend("<option value='empty' selected='true'>------[%Select] Län------</option>"); $(this).find("option:first")[0].selected = true; }); $("#chained_sub_child").cascade("#chained_child", { ajax: { url: '/locations/getChildren' }, template: commonTemplate, match: commonMatch }).bind("loaded.cascade", function(e, target) { $(this).prepend("<option value='empty' selected='true'>------[%Select] Kommun------</option>"); $(this).find("option:first")[0].selected = true; }); $("#chained_sub_sub_child").cascade("#chained_sub_child", { ajax: { url: '/locations/getChildren' }, template: commonTemplate, match: commonMatch }).bind("loaded.cascade", function(e, target) { $(this).prepend("<option value='empty' selected='true'>------[%Select] Stad------</option>"); $(this).find("option:first")[0].selected = true; }); }); I added one condition to jquery.cascade.ext.js if (opt.getParentValue(parent) != "empty") $.ajax(_ajax); to prevent an Ajax request when no value is selected, but I ran into a problem: when I reset the selection in the first box, the 3rd box and those below it do not refresh. And a second issue: I would like to know the best place to inject my own function that will do something, with one requirement - I need to know that all the boxes have finished their work. If somebody has worked with this plugin, let me know; maybe together we could find a solution. Thanks in advance...

    Read the article

  • No matter what, I can't get this stupid progress bar to update from a thread!

    - by Synthetix
    I have a Windows app written in C (using gcc/MinGW) that works pretty well except for a few UI problems. One, I simply cannot get the progress bar to update from a thread. In fact, I probably can't get ANY UI stuff to update. Basically, I have a spawned thread that does some processing, and from that thread I attempt to update the progress bar in the main thread. I tried this by using PostMessage() to the main hwnd, but no luck even though I can do other things like open message boxes. However, it's unclear whether the message box is getting called within the thread or on the main thread. Here's some code: //in header/globally accessible HWND wnd; //main application window HWND progress_bar; //progress bar typedef struct { //to pass to thread DWORD mainThreadId; HWND mainHwnd; char *filename; } THREADSTUFF; //callback function LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam){ switch(msg){ case WM_CREATE:{ //create progress bar progress_bar = CreateWindowEx( 0, PROGRESS_CLASS, (LPCTSTR)NULL, WS_CHILD | WS_VISIBLE, 79,164,455,15, hwnd, (HMENU)20, NULL, NULL); break; } case WM_COMMAND:{ if(LOWORD(wParam)==2){ //do some processing in a thread //struct of stuff I need to pass to thread THREADSTUFF *threadStuff; threadStuff = (THREADSTUFF*)malloc(sizeof(*threadStuff)); threadStuff->mainThreadId = GetCurrentThreadId(); threadStuff->mainHwnd = hwnd; threadStuff->filename = (void*)&filename; hThread1 = CreateThread(NULL,0,convertFile (LPVOID)threadStuff,0,NULL); }else if(LOWORD(wParam)==5){ //update progress bar MessageBox(hwnd,"I got a message!", "Message", MB_OK | MB_ICONINFORMATION); PostMessage(progress_bar,PBM_STEPIT,0,CLR_DEFAULT); } break; } } } This all seems to work okay. The problem is in the thread: DWORD WINAPI convertFile(LPVOID params){ //get passed params, this works perfectly fine THREADSTUFF *tData = (THREADSTUFF*)params; MessageBox(tData->mainHwnd,tData->filename,"File name",MB_OK | MB_ICONINFORMATION); //yep PostThreadMessage(tData->mainThreadId,WM_COMMAND,5,0); //only shows message PostMessage(tData->mainHwnd,WM_COMMAND,5,0); //only shows message } When I say, "only shows message," that means the MessageBox() function in the callback works, but not the PostMessage() to update the position of the progress bar. What am I missing?
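
    For reference, a sketch of the pattern usually suggested for this: the worker thread posts an application-defined message to the main window, and the window procedure, which runs on the UI thread, is the only code that touches the progress bar. The message name below is made up for illustration; the rest assumes the variables from the question:

        /* needs <windows.h> and <commctrl.h> */
        #define WM_APP_PROGRESS (WM_APP + 1)   /* illustrative application-defined message */

        /* in the worker thread: report progress without touching any control */
        PostMessage(tData->mainHwnd, WM_APP_PROGRESS, 0, 0);

        /* in WndProc, running on the UI thread */
        case WM_APP_PROGRESS:
            SendMessage(progress_bar, PBM_STEPIT, 0, 0);
            return 0;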

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this: CREATE TABLE [dbo].[results]( [id] [bigint] IDENTITY(1,1) NOT NULL, [userid] [int] NULL, [variable] [varchar](8) NULL, [value] [tinyint] NULL, [submitted] [smalldatetime] NULL) Where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this): SELECT t.id, t.variable, t.value FROM results t WITH (NOLOCK) WHERE t.userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') AND t.id IN (SELECT MAX(id) AS id FROM results WITH (NOLOCK) WHERE userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') GROUP BY variable) Which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly-elegant solution in place for sharding the data across multiple tables which has been working quite well, but I am open for any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
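
    One alternative worth benchmarking against the nested MAX(id) subquery (assuming SQL Server 2005 or later, and an index such as one on userid, variable, id DESC to support it) is a ROW_NUMBER() query, sketched here with the same literals as the example above:

        SELECT id, variable, value
        FROM (
            SELECT t.id, t.variable, t.value,
                   ROW_NUMBER() OVER (PARTITION BY t.variable ORDER BY t.id DESC) AS rn
            FROM results t WITH (NOLOCK)
            WHERE t.userid = '2111846'
              AND t.variable IN ('internat', 'veteran', 'athlete')
        ) AS latest
        WHERE rn = 1;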

    Read the article

  • Javascript auto calculating with (+) and (-)

    - by Josh
    I need some help finding the error in my javascript calculation. I need to calculate the sum of my input boxes automatically and have my user be able to edit the calculation using + or - buttons. The code I have already does the calculation automatically if you manually enter the numbers, but pressing the + or - does not change the calculation. Here is the code: <html> <head> <script language="javascript"> function Calc(className){ var elements = document.getElementsByClassName(className); var total = 0; for(var i = 0; i < elements.length; ++i){ total += parseInt(elements[i].value); } document.form0.total.value = total; } function addone(field) { field.value = Number(field.value) + 1; } function subtractone(field) { field.value = Number(field.value) - 1; } </script> </head> <body> <form name="form0" id="form0"> 1: <input type="text" name="box1" id="box1" class="add" value="0" onKeyUp="Calc('add')" onChange="updatesum()" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box1);"> <input type="button" value=" - " onclick="subtractone(box1);"> <br /> 2: <input type="text" name="box2" id="box2" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box2);"> <input type="button" value=" - " onclick="subtractone(box2);"> <br /> 3: <input type="text" name="box3" id="box3" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box3);"> <input type="button" value=" - " onclick="subtractone(box3);"> <br /> <br /> Total: <input readonly style="border:0px; font-size:14; color:red;" id="total" name="total"> </form> </body></html> Im sure the issue must be small, I just cant put my finger on it.
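
    One likely culprit is that the + and - handlers change the value but never re-run the total, which only happens on keyup in the text boxes. A small sketch of that part, with everything else assumed unchanged:

        function addone(field) {
          field.value = Number(field.value) + 1;
          Calc('add');   // recalculate the total after the button click as well
        }
        function subtractone(field) {
          field.value = Number(field.value) - 1;
          Calc('add');
        }
        // and pass the element explicitly from the markup:
        // <input type="button" value=" + " onclick="addone(document.getElementById('box1'));">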

    Read the article

  • How to reference a particular JSON in a function call.

    - by Jane Wilkie
    Hi guys! I have two JavaScript routines. The first one declares some JSON and contains a function that takes two arguments: the first argument is the JSON object that needs traversing, and the second is the tab that the rendering is done in. The second routine merely passes the name of the JSON that needs traversing and the tab to render in. The code is below: <script language="JavaScript1.2" type="text/javascript"> var arr = [ {"id":"10", "class":"child-of-9", "useless":"donotneed"}, {"id":"11", "class":"child-of-10", "useless":"donotneed"}]; var arrtwo = [ {"id":"12", "class":"child-of-12", "useless":"donotneed"}, {"id":"13", "class":"child-of-13", "useless":"donotneed"}]; function render_help(json,tab){ var html=''; for(var i=0;i<json.length;i++){ var obj = json[i]; for(var key in obj){ var attrName = key; var attrValue = obj[key]; if (attrName == "id"){ html = html +'<B>'+attrValue+'</B>'+'<BR><BR>'; }else if (attrName == "class"){ html = html + attrValue + '<BR><BR>'; } } } document.getElementById(tab).innerHTML=(html); } </script> <script language="JavaScript1.2" type="text/javascript"> render_help(arr,"helptab"); </script> Various testing and strategically placed alert boxes indicate that the tab parameter is being passed and interpreted correctly. I know this because when I change document.getElementById(tab).innerHTML=(html); to document.getElementById(tab).innerHTML=("Howdy"); it renders "Howdy" just fine. Putting in an alert box (alert(json)) to check the value of json yields [object.Object],[object.Object]. The JSON object remains elusive. For the purposes of this script I need the JSON "arr" to be iterated over. I feel like the answer is fairly obvious, but so far no luck. Admittedly I am new to JavaScript and I am apparently missing something. Does anyone have a clue as to what I'm overlooking here? Happy New Year to you all! Janie
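
    As a side note, a tiny sketch of two things that often help here: using JSON.stringify for the debugging alert so the structure is visible instead of [object Object] (native in current browsers), and reading the properties directly by name, with bracket notation for "class" since it is a reserved word in older JavaScript engines:

        alert(JSON.stringify(json));   // shows the array contents instead of [object Object]

        var html = '';
        for (var i = 0; i < json.length; i++) {
          var entry = json[i];
          html += '<B>' + entry.id + '</B><BR><BR>' + entry['class'] + '<BR><BR>';
        }
        document.getElementById(tab).innerHTML = html;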

    Read the article

  • Best way to ask confirmation from user before leaving the page

    - by JohnathanKong
    Hey everyone, I am currently building a registration page where, if the user leaves, I want to pop up a CSS box asking whether he is sure. I can accomplish this using confirm boxes, but the client says that they are too ugly. I've tried using unload and beforeunload, but neither can stop the page from being redirected. Using those two events, I return false, so maybe there's a way to cancel other than returning false? Another solution I've tried was redirecting them to another page that has my popup, but the problem with that is that if they do want to leave the page, and it wasn't a mistake, they lose the page they were originally trying to go to. If I were a user, that would irritate me. The last solution was a real popup window. The only thing I don't like about that is that the main window will have their destination page while the popup will have my page; in my opinion it looks disjointed. On top of that, I'd be worried about popup blockers. Just to add to everyone's comments: I understand that it is irritating to prevent users from exiting the page, and in my opinion it should not be done. Right now I am using a confirm box. It's not actually "preventing" the user from leaving; what the client actually wants to do is make a suggestion if the user is having doubts about registering. If the user is halfway through the registration process and leaves for some reason, the client wants to offer the user a free coupon to a seminar (this client sells seminars) to hopefully persuade the user to register. The client is under the impression that since the user is already on the form, he is thinking of registering, and therefore maybe a seminar related to what he is registering for would be the final push to get the user to register. Ideally I don't have to prevent the user from leaving; what would be just as good, and in my opinion better, is if I can pause the unload process. Maybe a sleep command? I don't really have to keep the user on the page because either way they will be leaving to go to a different page. Also, as people have stated, this is a terrible title, so if someone knows a better one, I'd really appreciate it if they could change the title to something not so spammer-inviting.
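
    For context, the only hook browsers expose for this is beforeunload: returning a string from it shows a native (not styleable) confirmation dialog, and scripts cannot sleep or otherwise pause the unload. A minimal sketch, where formIsDirty is a flag the page itself would have to maintain:

        window.onbeforeunload = function () {
          if (formIsDirty) {
            // The browser shows its own confirm dialog; the string may be displayed in it.
            return "You haven't finished registering yet.";
          }
          // Returning nothing lets the navigation continue silently.
        };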

    Read the article

  • Reason why UIImage gives me a 'distorted' image sometimes

    - by Cedric Vandendriessche
    I have a custom UIView with a UILabel and a UIImageView subview. (tried using UIImageView subclass aswell). I assign an image to it and add the view to the screen. I wrote a function which adds the amount of LetterBoxes to the screen (my custom class): - (void)drawBoxesForWord:(NSString *)word { if(boxesContainer == nil) { /* Create a container for the LetterBoxes (animation purposes) */ boxesContainer = [[UIView alloc] initWithFrame:CGRectMake(0, 205, 320, 50)]; [self.view addSubview:boxesContainer]; } /* Calculate width of letterboxes */ NSInteger numberOfCharacters = [word length]; CGFloat totalWidth = numberOfCharacters * 28 + (numberOfCharacters - 1) * 3; CGFloat leftCap = (320 - totalWidth) / 2; [letters removeAllObjects]; /* Draw the boxes to the screen */ for (int i = 0; i < numberOfCharacters; i++) { LetterBox *letter = [[LetterBox alloc] initWithFrame:CGRectMake(leftCap + i * 31 , 0, 28, 40)]; [letters addObject:letter]; [boxesContainer addSubview:letter]; [letter release]; }} This gives me the image below: http://www.imgdumper.nl/uploads2/4ba3b2c72bb99/4ba3b2c72abfd-Goed.png But sometimes it gives me this: imgdumper.nl/uploads2/4ba3b2d888226/4ba3b2d88728a-Fout.png I add them to the same boxesContainer but they first remove themselves from the superview, so it's not like you see them double or something. What I find weird is that they are all good or all bad.. This is the init function for my LetterBox: if (self == [super initWithFrame:aRect]) { /* Create the box image with same frame */ boxImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)]; boxImage.contentMode = UIViewContentModeScaleAspectFit; boxImage.image = [UIImage imageNamed:@"SpaceOpen.png"]; [self addSubview:boxImage]; /* Create the label with same frame */ letterLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)]; letterLabel.backgroundColor = [UIColor clearColor]; letterLabel.font = [UIFont fontWithName:@"ArialRoundedMTBold" size:26]; letterLabel.textColor = [UIColor blackColor]; letterLabel.textAlignment = UITextAlignmentCenter; [self addSubview:letterLabel]; } return self;} Does anyone have an idea why this could be? I'd rather have them display correctly every time :)

    Read the article

  • On saving a new record an associated id changes to 9 figure number

    - by Dave
    Hi, I have a table of venues, with each venue belonging to an area and a type. I recently dropped the table and added some addressline fields to it. I have re-migrated it, but now the area_id field saves as what looks like a random 9-figure number. Both the area_id and venuetype_id integers are created in the same way from the create-new form, and the venuetype_id saves as normal but the area_id does not. Can anyone offer any help? What's shown in the console: => [#<Venue id: 4, name: "sdf", addressline1: "", addressline2: "", addressline3: "", addressline4: "", icontoppx: 234, iconleftpx: 234, area_id: 946717224, venuetype_id: 8, created_at: "2011-03-17", updated_at: "2011-03-17 23:33:53">] irb(main):030:0> The area_id should be 8 in the above example. The area and venuetype ids are selected from dropdown boxes on the new venue form. new form <%= form_for @venue do |f| %> <p>name: <br> <%= f.text_field :name %></p> <p>top: <br> <%= f.text_field :icontoppx %></p> <p>left: <br> <%= f.text_field :iconleftpx %></p> <p>addressline1: <br> <%= f.text_field :addressline1 %></p> <p>addressline2: <br> <%= f.text_field :addressline2 %></p> <p>addressline3: <br> <%= f.text_field :addressline3 %></p> <p>addressline4: <br> <%= f.text_field :addressline4 %></p> <p>area: <br> <%= f.collection_select(:area_id, Area.all, :id, :name) %></p> <p>venuetype: <br> <%= f.collection_select(:venuetype_id, Venuetype.all, :id, :name) %></p> <br><br> <div class="button"><%= submit_tag %></div> <% end %> Areas table class CreateAreas < ActiveRecord::Migration def self.up create_table :areas do |t| t.string :name t.timestamps end end def self.down drop_table :areas end end Thanks very much for any help!

    Read the article

  • Line up swing components by edges

    - by rasen58
    Is it possible to line up Swing components? The components are in separate panels which both use FlowLayout. These two panels are in another panel which uses a GridLayout. As you can see there is a subtle difference and I find it annoying. I know that all of the JLabels (the rectangles in blue/purple) have the same size, so I think it might be because of the '+' and '*', but I'm not sure, because the left sides of the first two boxes aren't lined up either. The panels: JPanel panel2 = new JPanel(new GridLayout(4, 1)); JPanel panel2a = new JPanel(new FlowLayout()); JPanel panel2b = new JPanel(new FlowLayout()); The first two rectangles (purple): add1 = new JLabel("", JLabel.CENTER); add1.setTransferHandler(new TransferHandler("text")); add1.setBorder(b2); add2 = new JLabel("", JLabel.CENTER); add2.setTransferHandler(new TransferHandler("text")); add2.setBorder(b2); The two blue rectangles: textFieldA = new JTextField(); textFieldA.setHorizontalAlignment(JTextField.CENTER); textFieldA.setEditable(false); textFieldA.setBorder(new LineBorder(Color.blue)); textFieldM = new JTextField(); textFieldM.setHorizontalAlignment(JTextField.CENTER); textFieldM.setEditable(false); textFieldM.setBorder(new LineBorder(Color.blue)); The + and *: opA = new JLabel("+", JLabel.CENTER); opS = new JLabel("*", JLabel.CENTER); Showing that the rectangles are the same size: Dimension d = card1.getPreferredSize(); int width = d.width + 100; int height = d.height + 50; add1.setPreferredSize(new Dimension(width, height)); add2.setPreferredSize(new Dimension(width, height)); mult1.setPreferredSize(new Dimension(width, height)); mult2.setPreferredSize(new Dimension(width, height)); textFieldA.setPreferredSize(new Dimension(width, height)); textFieldM.setPreferredSize(new Dimension(width, height)); Adding to the panels: panel2a.add(add1); panel2a.add(opA); panel2a.add(add2); panel2a.add(enterA); panel2a.add(textFieldA); panel2c.add(mult1); panel2c.add(opM); panel2c.add(mult2); panel2c.add(enterM); panel2c.add(textFieldM); panel2.add(panel2a); panel2.add(panel2c);
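
    For reference, a rough sketch of one way this is often handled: put both rows into a single GridBagLayout so that each column shares one width. It reuses the component names from the question, assumes the usual java.awt/javax.swing imports, and replaces the two FlowLayout sub-panels:

        JPanel panel2 = new JPanel(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.insets = new Insets(4, 4, 4, 4);

        c.gridy = 0;  // addition row
        c.gridx = 0; panel2.add(add1, c);
        c.gridx = 1; panel2.add(opA, c);
        c.gridx = 2; panel2.add(add2, c);
        c.gridx = 3; panel2.add(enterA, c);
        c.gridx = 4; panel2.add(textFieldA, c);

        c.gridy = 1;  // multiplication row
        c.gridx = 0; panel2.add(mult1, c);
        c.gridx = 1; panel2.add(opM, c);
        c.gridx = 2; panel2.add(mult2, c);
        c.gridx = 3; panel2.add(enterM, c);
        c.gridx = 4; panel2.add(textFieldM, c);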

    Read the article

  • JavaScript: Can I declare a variable by querying which function is called? (Newbie)

    - by belle3WA
    I'm working with an existing JavaScript-powered cart module that I am trying to modify. I do not know JS and for various reasons need to work with what is already in place. The text that appears for my quantity box is defined within an existing function: function writeitems() { var i; for (i=0; i<items.length; i++) { var item=items[i]; var placeholder=document.getElementById("itembuttons" + i); var s="<p>"; // options, if any if (item.options) { s=s+"<select id='options"+i+"'>"; var j; for (j=0; j<item.options.length; j++) { s=s+"<option value='"+item.options[j].name+"'>"+item.options[j].name+"</option>"; } s=s+"</select>&nbsp;&nbsp;&nbsp;"; } // add to cart s=s+method+"Quantity: <input id='quantity"+i+"' value='1' size='3'/> "; s=s+"<input type='submit' value='Add to Cart' onclick='addtocart("+i+"); return false;'/></p>"; } placeholder.innerHTML=s; } refreshcart(false); } I have two different types of quantity input boxes; one (donations) needs to be prefaced with a dollar sign, and one (items) should be blank. I've taken the existing additem function, copied it, and renamed it so that there are two identical functions, one for items and one for donations. The additem function is below: function additem(name,cost,quantityincrement) { if (!quantityincrement) quantityincrement=1; var index=items.length; items[index]=new Object; items[index].name=name; items[index].cost=cost; items[index].quantityincrement=quantityincrement; document.write("<span id='itembuttons" + index + "'></span>"); return index; } Is there a way to declare a global variable based on which function (additem or adddonation) is called so that I can add that into the writeitems function so display or hide the dollar sign as needed? Or is there a better solution? I can't use HTML in the body of the cart page because of the way it is currently coded, so I'm depending on the JS to take care of it. Any help for a newbie is welcome. Thanks!
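
    One lightweight option, rather than introducing a global, is to tag each item with its type when it is created and branch on that flag inside writeitems(). A sketch, where the isDonation property name is made up for illustration and everything else stays as it is:

        // adddonation reuses the existing additem and just marks the item.
        function adddonation(name, cost, quantityincrement) {
          var index = additem(name, cost, quantityincrement);
          items[index].isDonation = true;
          return index;
        }

        // inside writeitems(), when building the quantity field:
        // var prefix = item.isDonation ? "$" : "";
        // s = s + method + "Quantity: " + prefix + "<input id='quantity" + i + "' value='1' size='3'/> ";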

    Read the article

  • Add a fadein fade out in jQuery, on multiple conditional statements

    - by Matthew Harwood
    Task: on click of the li navigation filter, show and hide content with a fade-in/fade-out transition. Problem: I'm just guessing and checking on where to place this fade-in/fade-out transition. Furthermore, I feel like my code is too inefficient because I'm using 4 conditional statements. Could Stack help me create a solution that improves the overall logic of this script so I can just make a pretty transition? LIVE CODE jQuery Script $(document).ready(function () { //attach a single click listener on li elements $('li.navCenter').on('click', function () { // get the id of the clicked li var id = $(this).attr('id'); // match current id with string check then apply filter if (id == 'printInteract') { //reset all the boxes for multiple clicks $(".box").find('.video, .print, .web').closest('.box').show(); $(".box").find('.web, .video').closest('.box').hide(); $(".box").find('.print').show(); } if (id == 'webInteract') { $(".box").find('.video, .print, .web').closest('.box').show(); $(".box").find('.print, .video').closest('.box').hide(); $(".box").find('.web').show(); } if (id == 'videoInteract') { $(".box").find('.video, .print, .web').closest('.box').show(); $(".box").find('.print, .web').closest('.box').hide() $(".box").find('.video').show(); } if (id == 'allInteract') { $(".box").find('.video, .print, .web').closest('.box').show(); } }); HTML Selected <nav> <ul class="navSpaces"> <li id="allInteract" class="navCenter"> <a id="activeAll" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/logo30px.png" /><h3>all</h3></div></a> </li> <li id="printInteract" class="navCenter"> <a id="activePrint" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/print.gif" /><h3>print</h3></div></a> </li> <li id="videoInteract" class="navCenter"> <a id="activeVideo" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/video.gif" /><h3>video</h3></div></a> </li> <li id="webInteract" class="navCenter"> <a id="activeWeb" class="navBg" href="#"><div class="relativeCenter"><img src="asset/img/web.gif" /><h3>web</h3></div></a> </li> </ul> ps. Sorry for the newbie question
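
    For what it's worth, a sketch of one way to collapse the four branches into a lookup table and get the fade at the same time (selectors are taken from the markup above; the durations are arbitrary):

        $(document).ready(function () {
          // Map each nav id to the content class it reveals.
          var filters = {
            printInteract: '.print',
            webInteract:   '.web',
            videoInteract: '.video',
            allInteract:   '.print, .web, .video'
          };

          $('li.navCenter').on('click', function () {
            var matched = $('.box').has(filters[this.id]);
            $('.box').not(matched).fadeOut(400);
            matched.fadeIn(400);
          });
        });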

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack which is hosted in IIS. While performing load testing of the API we discovered that the response times are good but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer so we get 3,500 concurrents per server. However as soon as we increase the number of total concurrent users we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds. The memory and CPU on the servers are quite low, both while the response times are good and when after they deteriorate. At peak with 10,000 concurrent users the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The below screenshot shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second. To the right of the screenshot you can see the requests per second graph becoming really erratic. This is the main indicator for slow response times. As soon as we see this pattern we notice slow response times in the load test. How do we go about troubleshooting this performance issue? We are trying to identify if this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file: <system.web> <applicationPool maxConcurrentRequestsPerCPU="5000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000" /> </system.web> More details: The purpose of the API is to combine data from various external sources and return as JSON. It is currently using an InMemory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all data required and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' that is implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services to fetch the data from the external sources in an asynchronous fashion so that the endpoint should only be as slow as the slowest external call (unless we have data in the cache of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of number of threads available to the process?

    Read the article

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well-connected to our primary data center via fiber. Others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks, and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and they of course each have their own DNS zones. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP configuration on everything except servers. Printers and etc will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS. That's currently handled by the Windows DHCP servers on each of the respective domains. We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but not be a member of any Windows domain) will not have any Windows DNS/DHCP server(s) available. We also are looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't extremely quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers): Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-price linux server. Is there any such software that would allow DHCP leases granted by these linux boxes to be dynamically registered on the Windows DNS servers? Come up with a Linux solution for both DNS and DHCP, and deploy low-price linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone, for every lease (where a hostname is provided and is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receiving a real-time copy / replication / sync of the leases table so that if one server dies, we still know which MAC has what address. Purchase dedicated DNS/DHCP appliances, deploying to all sites. 
    From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this. Or, some other solution that we've thus far overlooked and will consider upon recommendation. Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers, in order to solve this problem. Does anyone have any experience or advice regarding that?
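
    On the ISC DHCP point, a heavily trimmed dhcpd.conf sketch of the two features mentioned, failover between a pair of servers and dynamic DNS updates, with placeholder names and addresses throughout (note that a Windows zone set to accept only secure dynamic updates will reject plain RFC 2136 updates like these):

        # /etc/dhcp/dhcpd.conf (fragment, illustrative values only)
        ddns-update-style interim;           # send DNS updates for leases
        ddns-domainname "corp.example.com.";
        ignore client-updates;

        failover peer "dhcp-pair" {
            primary;                         # the other server declares "secondary;"
            address 10.0.0.10;
            peer address 10.0.0.11;
            max-response-delay 60;
            max-unacked-updates 10;
            mclt 3600;
            split 128;
        }

        subnet 10.0.1.0 netmask 255.255.255.0 {
            option routers 10.0.1.1;
            pool {
                failover peer "dhcp-pair";
                range 10.0.1.100 10.0.1.200;
            }
        }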

    Read the article

  • bind9 DNS Ubuntu names pingible on server, but not on Windows Machines?

    - by leeand00
    I setup a DNS server today on Ubuntu, following this tutorial. My intent was to setup my network for dns-name resolving on the private LAN within a single zone (nothing fancy I just want name resolution). I've tested the setup on the DNS server machine itself, and I can ping all the machines listed in the configuration file. I've also configured the Windows Machines on my network, and for some reason they are incapable of pinging by names as was possible on the DNS Server itself. I've tried running nslookup on the Windows DNS clients and I receive and error mentioning the address of the DNS server. DNS forwarding works fine, I'm not having any trouble accessing the internet, the problem only lies within accessing names within the private LAN. Here are my configuration files: options { directory "/var/cache/bind"; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; forwarders { 8.8.8.8; 8.8.8.4; 74.242.0.12; //68.87.76.178; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.options zone "leerdomain.local" { type master; file "/etc/bind/zones/leerdomain.local.db"; notify no; }; zone "2.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/rev.2.168.192.in-addr.arpa"; notify no; }; /etc/bind/named.conf.local Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 2010011001 28800 3600 604800 38400 ); leerdomain.local. IN NS ns.leerdomain.local. ns IN A 192.168.2.9 asus IN A 192.168.2.254 www IN CNAME asus vaio IN A 192.168.2.253 iptouch IN A 192.168.2.252 toshiba IN A 192.168.2.251 gw IN A 192.168.2.1 TXT "Network Gateway" /etc/bind/zones/leerdomain.local.db (Validates fine with named-checkzone when validating zone leerdomain.local) Reverse Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 201001101 28800 604800 604800 86400 ) IN NS ns.leerdomain.local. 1 IN PTR gw.leerdomain.local. 254 IN PTR asus.leerdomain.local. 253 IN PTR vaio.leerdomain.local. 252 IN PTR iptouch.leerdomain.local. 251 IN PTR toshiba.leerdomain.local. /etc/bind/zones/rev.2.168.192.in-addr.arpa *(Does not validate with named-checkzone when validating zone leerdomain.local gives an error of: zone leerdomain.local/IN: NS 'ns.leerdomain.local' has no address records (A or AAAA) zone leerdomain.local/IN: not loaded due to errors. * Despite not validating bind9 starts without errors in /var/log/syslog I've also configured a few of the windows machines on my network to have the static ip as specified in the lookup and reverse lookup config files. i.e. Using nslookup yields the following results: C:\Users\leeand00>nslookup ns Server: UnKnown Address: 192.168.2.9 *** UnKnown can't find ns: Non-existent domain C:\Users\leeand00>nslookup gw Server: UnKnown Address: 192.168.2.9 Name: gw. Additionally trying to ping by name also fails on machines that are not the DNS Server. Is there something wrong with my configuration of either the nameserver or the Windows Boxes that is keeping me from accessing other machines using names?

    Read the article

  • Exchange Mail Flow

    - by Tuck918
    Hello. I have a question. We have one Exchange 2003 server and two Exchange 2007 servers. Almost all of our mailboxes are on 2007, but we do still have one shared mailbox, a Unity mailbox and a journaling mailbox on 2003. Public Folders have been set to replicate to 2007. I have set up a send connector on 2007 with a cost of 1. Receive connectors have Anonymous Users checked on 2007. On 2003 there are two connectors: the Internet Email connector and the connector that connects 2003 to 2007. We have a SPAM filtering device that email goes through before it is handed off to Exchange. The SPAM filtering device is set to send email to one of our Exchange 2007 servers. Here is my question/problem: even though the SPAM filtering device is set to forward email to Exchange 2007, somehow all of our email is still going through the Exchange 2003 server before it finally hits the users' mailboxes on the Exchange 2007 server. How can I change it so that all email goes directly to Exchange 2007 and never routes through Exchange 2003, both inbound and outbound? I would also like to add: in the EMC under Org > Hub > Send Connector there are two connectors. One is the "Internet Connector" from the 2003 box and the other is the new one I created. The address space on the 2003 one is set to a cost of 2, no smart hosts, and the 2003 box is listed as the Source Server. The other Send Connector has an address space cost of 1, no smart host, and has the 2 Exchange 2007 servers listed as the source servers. In the EMC under Server > Hub my two Exchange 2007 servers are listed. Each one has 2 receive connectors. Both Receive Connectors are set up the same way. The Default Receive Connector has Anonymous Users checked. The other Receive Connector is labelled "Client" and I am not sure what it does or why it's there; Anonymous Users is not checked. No smart hosts are configured on 2003. Additional details: currently we have 3 Exchange servers, one Exchange 2003 server and two Exchange 2007 servers. The Exchange 2003 server is acting as the "bridgehead" server and all email is routing through this server, inbound and outbound. We want to decommission this server and use our two Exchange 2007 servers as our mailbox servers. All of our user mailboxes are already on one of the Exchange 2007 boxes and we want to put what's left on the Exchange 2003 box on our other Exchange 2007 box. Both Exchange 2007 servers are currently CAS, HT and MB servers. We have a SPAM filtering device that sits between our Exchange servers and the firewall and have it configured to send messages to one of the Exchange 2007 servers, but when we look at the message headers we can see that messages are still being routed to the Exchange 2003 box. We want to bypass Exchange 2003 in the routing process as it is dying and starting to have major issues, so every time it goes down our email is down. Is there possibly some sort of AD routing link/site link stuff going on?

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;) Environment Info (all Windows): 2 sites 30 servers site #1 (3TB of backup data) 5 servers site #2 (1TB of backup data) MPLS backbone tunnel connecting site #1 and site #2 Current Backup Process: Online Backup (disk-to-disk) Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores them on said disks. Off-site Backup (tape) Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week which gets picked up by our off-site storage company. Obviously we rotate two tape libraries, one is always here and one is always there. Requirements: Eliminate the need for tape and off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa. Software based solution as hardware options have been too pricey (ie, SonicWall, Arkeia). Agents for Exchange, SharePoint, and SQL. Some Ideas So Far: Storage DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too. Software Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication. Server Because there is no more need for a SCSI adapter (for tape drive) we are going to virtualize our backup server as it is currently the only physical machine save for SQL boxes. Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this each of those huge files will go across the wire every week because they change every day. And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long. UPDATE 2: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI based product and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being to restrictive keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone. UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by user42055
    I have the following VPN configuration: +------------+ +------------+ +------------+ | outpost |----------------| kino |----------------| guchuko | +------------+ +------------+ +------------+ OS: FreeBSD 6.2 OS: Gentoo 2.6.32 OS: Gentoo 2.6.33.3 Keyname: client3 Keyname: server Keyname: client1 eth0: 10.0.1.254 eth0: 203.x.x.x eth0: 192.168.0.6 tun0: 192.168.150.18 tun0: 192.168.150.1 tun0: 192.168.150.10 P-t-P: 192.166.150.17 P-t-P: 192.168.150.2 P-t-P: 192.168.150.9 Kino is the server and has client-to-client enabled. All three machines have ip forwarding enabled, by this on the gentoo boxes: net.ipv4.conf.all.forwarding = 1 And this on the FreeBSD box: net.inet.ip.forwarding: 1 In the server's "ccd" directory is the following files: client1: iroute 192.168.0.0 255.255.255.0 client3: iroute 10.0.1.0 255.255.255.0 The server config has these routes configured: push "route 192.168.0.0 255.255.255.0" push "route 10.0.1.0 255.255.255.0" route 192.168.0.0 255.255.255.0 route 10.0.1.0 255.255.255.0 Kino's routing table looks like this: 192.168.150.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 10.0.1.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 192.168.0.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 192.168.150.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 Outpost's like this: 192.168.150 192.168.150.17 UGS 0 17 tun0 192.168.0 192.168.150.17 UGS 0 2 tun0 192.168.150.17 192.168.150.18 UH 3 0 tun0 And Guchuko's like this: 192.168.150.0 192.168.150.9 255.255.255.0 UG 0 0 0 tun0 10.0.1.0 192.168.150.9 255.255.255.0 UG 0 0 0 tun0 192.168.150.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse - pings from Outpost to Guchuko's LAN IP. However... Pings from Outpost, to a machine on Guchuko's LAN work fine: .(( root@outpost )). (( 06:39 PM )) :: ~ :: # ping 192.168.0.3 PING 192.168.0.3 (192.168.0.3): 56 data bytes 64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms 64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms But a ping from Guchuko, to a machine on Outpost's LAN does not: .(( root@guchuko )). (( 06:43 PM )) :: ~ :: # ping 10.0.1.253 PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data. --- 10.0.1.253 ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2000ms Guchuko's tcpdump of tun0 shows: 18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64 18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64 18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64 Outpost's tcpdump on tun0 shows: 18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64 18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64 18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64 So Outpost is receiving the ICMP request destined for the machine on it's subnet, but appears not be forwarding it. Outpost has gateway_enable="YES" in its rc.conf which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting ?

    Read the article

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can it seems that syncing over the internet is generally only a feature on quite high end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying 2 of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance. Some specifications: Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other. Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine. Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic. OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set top box which I think can play files off the network. Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other where there are at least two PCs that might be accessing the files. Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker. Update I've been digging some more and it looks like QNAP's Remote Replication function is actually just Rsync, so only really suitable for one-way syncing. I've posted on their forum to double check, but I think that's the case. In which case, I think the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet?, or has anyone had any luck installing onto NASs for this purpose? (Also, updated question to clarify that I'm after two-way syncing.)

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site:

    "Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute."

    Well, it is most certainly not behaving like an O(1) operation for me. Under low to normal load on our site, memcached response times for get and set operations are about 0.001 seconds. Not bad. But if we triple the load we get outliers that take 100x (or in rare cases 1000x!) as long; I even saw one instance where memcached took 2.2442 seconds to store a value. Obviously this is killing our site. Here is the output of Memcached::getStats during one of the slow periods:

    [pid] => 18079
    [uptime] => 8903
    [threads] => 4
    [time] => 1332795759
    [pointer_size] => 32
    [rusage_user_seconds] => 26
    [rusage_user_microseconds] => 503872
    [rusage_system_seconds] => 125
    [rusage_system_microseconds] => 477008
    [curr_items] => 42099
    [total_items] => 422500
    [limit_maxbytes] => 943718400
    [curr_connections] => 84
    [total_connections] => 4946
    [connection_structures] => 178
    [bytes] => 7259957
    [cmd_get] => 1679091
    [cmd_set] => 351809
    [get_hits] => 1662048
    [get_misses] => 17043
    [evictions] => 0
    [bytes_read] => 109388476
    [bytes_written] => 3187646458
    [version] => 1.4.13

    Things I have ruled out so far:

    - Hitting the max connections limit: curr_connections of 84 is well below the default maximum of 1024.
    - Swapping: the machine has 900M of its 1024M of memory dedicated to memcached, and it only appears to be holding about 7 MB of data according to the bytes stat.

    How would I diagnose the other hardware problems? prstat doesn't show much going on in terms of CPU or memory usage. I'm not sure how to check for network problems, but since this is a dedicated server on the same private network as the web box I don't think it's a connectivity issue (ping is under a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts.

    Edit: I also forgot to mention that I've tried both persistent and non-persistent connections, with minimal to no impact.
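
    One way to narrow down "network vs. server-side stall" is to time raw requests over the memcached text protocol, once from the web box and once on the memcached server itself via localhost - if the outliers only show up from the web box, the network is suspect. A minimal probe sketch (host, port and sample count are placeholders):

        import socket
        import time

        HOST, PORT = "10.0.0.5", 11211   # placeholder - your memcached box
        SAMPLES = 1000

        def timed_get(sock, key):
            """Issue one 'get' over the text protocol and return elapsed seconds."""
            start = time.time()
            sock.sendall(("get %s\r\n" % key).encode())
            buf = b""
            while not buf.endswith(b"END\r\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    raise RuntimeError("connection closed by server")
                buf += chunk
            return time.time() - start

        def main():
            sock = socket.create_connection((HOST, PORT), timeout=5)
            timings = sorted(timed_get(sock, "latency_probe_key") for _ in range(SAMPLES))
            sock.close()
            print("min %.6f  median %.6f  p99 %.6f  max %.6f" % (
                timings[0], timings[len(timings) // 2],
                timings[int(len(timings) * 0.99)], timings[-1]))

        if __name__ == "__main__":
            main()

    Running it during a slow period and comparing the p99/max figures from the two vantage points should tell you which of the doc's suspects (network vs. host) to chase first.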

    Read the article

  • What NAS setup for syncing over the internet?

    - by Jamse
    I have family living a few hours away and a lot of files I would like to share - especially folders of digital photos, but also documents and so on - partly so they can see them, partly so I have access when I visit, and partly for backup/redundancy. The hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS regardless. At the other end my family have a few computers, so they would probably benefit from a NAS too.

    My general idea (though I'm open to other ways of achieving the same objectives) is to get a matching pair of NAS units and have them sync over the internet. To cut down on bandwidth use I would get them in sync locally first. From what I've read, internet syncing is generally a feature of fairly high-end units, but QNAP seem to offer it on their TS-110 and TS-210 (they call it "remote replication"). They are reasonably priced for what they are, but buying two of them plus drives (say 1 TB or 2 TB each) puts me at about £400 total.

    So I'm looking for recommendations. I don't want to spend more than the QNAPs would cost, but other ideas are most welcome. I'm comfortable with technology and tinkering, but I don't have as much time for it as I'd like, so I favour solutions that need less tinkering rather than more (even though that's less fun!). Comments from anyone who has used the QNAP boxes for this would be especially welcome. My requirements:

    - Two-way syncing. Changes made at either end should be synced to the other; neither unit should be just a read-only mirror of the other.
    - Not real time. Syncing doesn't need to be instant - a daily overnight sync would be fine.
    - Set and forget. Minimal user interaction once set up; ideally syncs are scheduled and automatic.
    - OS independence. I run Windows XP plus an Ubuntu-based MythTV box; at the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I'd favour a self-contained system, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around that at my end, where only one PC is in regular use, but it would be harder at the other end where at least two PCs might be accessing the files.
    - Notifications. An email notification if the syncing fell over would be useful, though it's not a deal breaker.

    Thanks in advance.
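
    For scale, here is roughly what the "set and forget" plus "notifications" requirements boil down to if you roll the sync yourself (e.g. on the MythTV box) rather than using an appliance's built-in replication - a sketch assuming the far end is reachable over SSH and a local SMTP server can send mail; paths and addresses are placeholders:

        import subprocess
        import smtplib
        from email.mime.text import MIMEText

        LOCAL = "/srv/photos/"
        REMOTE = "family-nas:/srv/photos/"   # placeholder ssh-reachable path
        NOTIFY = "me@example.com"

        def rsync(src, dst):
            # --update skips files that are newer on the receiving side, giving a
            # crude "newest wins" merge when run in both directions.
            return subprocess.call(["rsync", "-az", "--update", src, dst])

        def notify(subject, body):
            msg = MIMEText(body)
            msg["Subject"] = subject
            msg["From"] = NOTIFY
            msg["To"] = NOTIFY
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

        if __name__ == "__main__":
            rc = rsync(LOCAL, REMOTE)
            if rc == 0:
                rc = rsync(REMOTE, LOCAL)
            if rc != 0:
                notify("photo sync failed", "rsync exited with status %d" % rc)

    Run nightly from cron it covers the "not real time" case, but note that --update never propagates deletions and can't resolve real conflicts - which is exactly the work that appliance replication features (or dedicated tools like unison) do for you, and a fair argument for the QNAP route.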

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now we have a rack of servers. Every server currently has at least two IP addresses, one for the public interface and one for the private; the servers hosting SSL websites have more. Our virtual servers are configured the same way.

    Private network: the private range is currently used only for backups and monitoring. It's a gigabit port and interface usage does not usually get very high. Other technologies we're considering would also use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would mean yet another IP network), a VPN for access to the private range (something I'd rather avoid), dedicated database servers, centralized configuration (like Puppet), LDAP, and centralized logging. We don't have any private addresses in our DNS records (only public addresses), so having servers pick the correct address for the right interface - without hard-coding IPs - probably means setting up a private DNS server, which means adding two different DNS entries to two different systems.

    Public network: our public range carries a variety of services, including web, email and FTP. There is a hardware firewall between our network and the public network, and we have a (relatively secure) method of instructing the firewall to open and close administrative access (web interfaces, SSH, etc.) for our current IP address. With either solution discussed here, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20 Mbps link. There are a couple of legacy servers with Fast Ethernet ports, but they are scheduled for decommissioning; all other production boxes have at least two Gigabit Ethernet ports, and the more traffic-heavy servers have 4-6 available (none is using more than two right now).

    IPv6: I want to get an IPv6 prefix from our ISP so that at least every server has one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 range would make it three.

    Just use IPv6? I'm thinking about dropping the private IPv4 range and using the IPv6 range as the primary means of all communication. If an interface starts reaching its capacity, the newly freed interfaces can be trunked. The advantage is that either the public or the private traffic can exceed 1 Gbps if it needs to. Traffic on each interface is already analysed regularly to predict future bandwidth use, and in the rare instances where bandwidth unexpectedly peaks, QoS can ensure that traffic like our limited SSH access is prioritized so the problem can be corrected (where possible - our WAN is the bottleneck right now). It also avoids making an entry for every private address: we may still have private DNS (or just LDAP), but it will be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as simple as possible while keeping it reliable, upgradeable, scalable and (eventually) redundant. Having one IPv6 network plus a legacy IPv4 network seems like the best solution to me. Regarding using assigned IPv6 addresses for both roles, sharing the available bandwidth on one network (trunking more interfaces if needed):

    - Are there any technical disadvantages (limitations, buffers, scalability)?
    - Are there any other security considerations (besides the firewalls mentioned above)?
    - Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?
    - Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)?
    - Is there something else I haven't considered?
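
    One practical check before leaning on IPv6 as the primary network: confirm that every public service still answers on both stacks, since the legacy clients will keep arriving over IPv4. A minimal dual-stack reachability probe (hostnames and ports below are placeholders):

        import socket

        SERVICES = [("www.example.org", 80), ("mail.example.org", 25)]  # placeholders

        def reachable(host, port, family):
            """Attempt a TCP connect over the given address family; True on success."""
            try:
                infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
            except socket.gaierror:
                return False          # no A (or AAAA) record published
            for fam, socktype, proto, _, addr in infos:
                try:
                    with socket.socket(fam, socktype, proto) as s:
                        s.settimeout(3)
                        s.connect(addr)
                        return True
                except OSError:
                    continue
            return False

        if __name__ == "__main__":
            for host, port in SERVICES:
                print("%s:%d  IPv4=%s  IPv6=%s" % (
                    host, port,
                    reachable(host, port, socket.AF_INET),
                    reachable(host, port, socket.AF_INET6)))

    Run from outside the firewall it also doubles as a sanity check that the IPv6 rules on the hardware firewall actually mirror the IPv4 ones.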

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently we run a SaaS web application where each subscriber has their own physical instance of the application as well as their own database. Each web application instance is deployed on two different IIS boxes, both for load balancing and for redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored across two SQL Server 2012 machines with AlwaysOn for uptime; I don't use SQL Server clustering, as it doesn't provide storage-level failover and we don't have a shared storage box. Because it's a Windows setup there are also two domain controllers (we cheat: they're both Mac Minis at 17 W each, which keeps our colo power costs low), plus an Exchange server (Mailbox, Hub Transport and Client Access); one of the SQL Servers doubles as an Exchange Hub Transport.

    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), plus about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs and so on.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008, when the cloud was taking off, I read up on the proper "cloud" services like Google App Engine, where you write in Python against Google's API, they scale your application across servers, and their database service scales your storage. Simple enough to understand. Then along came Amazon: I understand how Amazon's storage works, but I'm not sure how its compute pricing works - web application pages don't take much CPU time to compute, so how do you even quantify usage?

    Finally, Rackspace got in on the act and now I'm really confused. Rackspace advertises "Cloud" SQL Server 2012 at about "$0.70 per hour". From the way it's advertised I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data, which would make a low-intensity application pretty cheap. Nope. I went into a sales chat and spoke to one of their advisors: the $0.70/hour is for every hour the SQL Server is running - but who wants a SQL Server for only a few hours? You need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at about $520 a month, which is ridiculously expensive for SQL Server when an SPLA license is only around $50 a month. That $520 doesn't include "fanatical support", and you still have to stack the cost of the host Windows server instance on top. From what I can tell, Rackspace's "Cloud" products look like a cynical rebranding of an overpriced VPS service, priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe its products, though I think that's because Azure offers traditional shared web hosting in addition to its own APIs you can target for scalable applications.
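
    For comparison, here is the arithmetic from the question gathered in one place. The figures are the ones quoted above; the cloud-side host instance cost and the value of admin time are left as placeholders since no quotes were given for them.

        # Rough monthly cost comparison using the figures from the question.
        HOURS_PER_MONTH = 24 * 31

        colo_rack = 700          # quarter-rack colocation, power and transfer included
        colo_spla = 150          # SPLA licensing
        admin_hours = 2 * 4.3    # ~2 hours/week checking in on the servers
        admin_rate = 0           # placeholder: hourly cost of admin time, if counted

        cloud_sql = 0.70 * HOURS_PER_MONTH   # "Cloud" SQL Server, billed per hour running
        cloud_host = 0                       # placeholder: host Windows instance(s), support

        colo_total = colo_rack + colo_spla + admin_hours * admin_rate
        cloud_total = cloud_sql + cloud_host

        print("colo:  $%.0f/month" % colo_total)
        print("cloud: $%.0f/month (SQL alone is $%.2f)" % (cloud_total, cloud_sql))

    With admin time priced at zero this reproduces the $850 vs. roughly $520-for-SQL-alone comparison in the question; raising admin_rate shows how much hands-off management the cloud premium would have to buy back to break even.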

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >