Search Results

Search found 1976 results on 80 pages for 'nic wise'.


  • Data aggregation: MongoDB vs MySQL

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data-aggregation requirements. The main project requirements are the following. Store millions of records for each user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year. Data aggregation on those entries must be performed on the fly. The users need to be able to filter the entries by a ton of available filters and then present summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results because the filter combinations (and thus the result sets) are huge. Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated for all the data. The data is going to arrive in batches most of the time, e.g. the user will upload the data every day, and it could be around 3000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example. I made a simple test of creating a table with 1 million rows and performing a simple sum of one column both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I have also made the test with CouchDB and had much worse results. What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible? As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might help with the implementation of the project? Thanks, Dimitris
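    For reference, a minimal sketch of an on-the-fly sum in the MongoDB shell using the aggregation pipeline (the pipeline was added in MongoDB 2.2, so it is newer than the versions these benchmarks were likely run against; the "entries" collection and its fields are illustrative, not from the question):

        // Filter one user's entries, then compute total, average and count in one pass.
        db.entries.aggregate([
            { $match: { userId: 42, category: "sales" } },   // any filter combination goes here
            { $group: {
                _id: null,
                total:   { $sum: "$amount" },
                average: { $avg: "$amount" },
                count:   { $sum: 1 }
            } }
        ]);

    On versions without the pipeline, the equivalent had to be expressed as a map-reduce job, which goes a long way toward explaining why early aggregation benchmarks against MongoDB looked so poor.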


  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z); N is finite, say 9. Provide basic arithmetic, at the very least the following 3: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back", converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above. It's "standard" - currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks; I'd much rather replace that with some commonly used CPAN solution instead of reinventing the wheel, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast); something that takes 1 second to sum up two 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent). Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.
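    One hedged possibility, assuming the CPAN module Math::Base36 is acceptable: it round-trips through ordinary integers internally, which is fine for 9 base36 digits, since 36**9 fits comfortably in a 64-bit integer. A minimal sketch:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Math::Base36 ':all';   # CPAN; exports encode_base36/decode_base36

        # Apply an integer operation to two fixed-width base36 strings.
        # Per the module's docs, encode_base36's second argument zero-pads
        # the result to the given width (here, 9 digits).
        sub b36 {
            my ($op, $a, $b) = @_;
            my $r = $op->(decode_base36($a), decode_base36($b));
            return encode_base36($r, 9);
        }

        print b36(sub { $_[0] + $_[1] },      '00000000A', '00000000B'), "\n"; # addition
        print b36(sub { $_[0] - $_[1] },      '00000000B', '00000000A'), "\n"; # subtraction
        print b36(sub { int($_[0] / $_[1]) }, '00000000Z', '000000002'), "\n"; # floor division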


  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to: case MyKeyword of 'CHIL': (code for CHIL); 'HUSB': (code for HUSB); 'WIFE': (code for WIFE); 'SEX': (code for SEX); else (code for everything else); end; Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.: if MyKeyword = 'CHIL' then (code for CHIL) else if MyKeyword = 'HUSB' then (code for HUSB) else if MyKeyword = 'WIFE' then (code for WIFE) else if MyKeyword = 'SEX' then (code for SEX) else (code for everything else); but I've heard this is relatively inefficient. What I had been doing instead is: P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX '); case P of 1: (code for CHIL); 6: (code for HUSB); 11: (code for WIFE); 17: (code for SEX); else (code for everything else); end; This, of course, is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is simple and understandable but also fast? (For reference, I am using Delphi 2009 with Unicode strings.) Followup: Toby recommended I simply use the IF THEN ELSE construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is. That is just a bonus if the particular method can identify it, like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than: if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then The IF THEN ELSE equivalent does not seem better in this case, being: if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections and it looks like I should delve into them. My question here would be whether they are built for efficiency, and how using TDictionary would compare in looks and in speed to the above two lines? In later profiling, I have found that the concatenation of strings, as in (' ' + MyKeyword + ' '), is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
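    A hedged sketch of the TDictionary approach (Generics.Collections ships with Delphi 2009; the names here are illustrative). Lookup is a single hash probe, with no per-call string concatenation:

        uses Generics.Collections;

        var
          Keywords: TDictionary<string, Integer>;

        // Build once at startup; reuse for every lookup.
        procedure InitKeywords;
        begin
          Keywords := TDictionary<string, Integer>.Create;
          Keywords.Add('CHIL', 1);
          Keywords.Add('HUSB', 2);
          Keywords.Add('WIFE', 3);
          Keywords.Add('SEX', 4);
        end;

        // Membership test, with the matched keyword's code as a bonus:
        var
          Code: Integer;
        begin
          if Keywords.TryGetValue(MyKeyword, Code) then
            // (code for the keyword identified by Code)
          else
            // (code for everything else)
        end;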


  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify here (without too much detail; these tools have been covered in other questions) my understanding of the options: NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language. Inno Setup - Free tool for setup executables. Various binary compression schemes. Pascal scripting engine. WIX - Free toolset to generate MSI binaries. XML definitions language. WIX ClickThrough - Additional tools for packaging, web download and auto-update detection (now part of WIX core). InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language. Wise - Commercial development environment for installation packaging. Generates MSI binaries. ClickOnce - Visual Studio supported framework for publishing applications to a webserver, with automatic detection of updates. No support for custom installation requirements (INI files, registry, etc.). Packages setup as an MSI binary. InstallAware - Commercial development environment for installation. Generates MSI binaries. Automatic update framework (Web Update). If I have missed any, please let me know. I also found some useful discussions of these technologies on StackOverflow: Best Simple Install System; Best choice for Windows installers; Alternatives to ClickOnce. I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packaging installations and providing a framework for developers to access the runtime environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates; I haven't really found that consensus. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but they each have considerable limitations (please correct me if I misrepresent any of these). I would be interested in a framework that provided some of the following: Third-party hosting/management of updates. Access to the client environment (INI files, registry, etc.). User registration/activation. Feedback/error reporting. This is leaving me with the strong impression that the best way to approach the web deployment problem is through a custom-built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications, for example FileZilla - HTTP request to update.filezilla-project.org to check for updates, downloads an NSIS binary (I think) and then shuts down to run the install.


  • wxpython - Nested Notebooks

    - by madtowneast
    I have been trying to make my nested notebooks a little bit more appealing code-wise. At the moment, I have this:

        #!/usr/bin/env python
        import os
        import sys
        import datetime
        import numpy as np
        from readmonifile import MonitorFile
        from sortmonifile import sort
        import wx

        class NestedPanelOne(wx.Panel):
            #----------------------------------------------------------------------
            # First notebook that creates the tab to select the component number
            #----------------------------------------------------------------------
            def __init__(self, parent, label, data):
                wx.Panel.__init__(self, parent=parent, id=wx.ID_ANY)
                # Loop creating the tabs according to the component name
                nestedNotebook = wx.Notebook(self, wx.ID_ANY)
                for slabel in sorted(data[label].keys()):
                    tab = NestedPanelTwo(nestedNotebook, label, slabel, data)
                    nestedNotebook.AddPage(tab, slabel)
                sizer = wx.BoxSizer(wx.VERTICAL)
                sizer.Add(nestedNotebook, 1, wx.ALL|wx.EXPAND, 5)
                self.SetSizer(sizer)

        class NestedPanelTwo(wx.Panel):
            #------------------------------------------------------------------------------
            # Second notebook that creates the tab to select the main monitoring variables
            #------------------------------------------------------------------------------
            def __init__(self, parent, label, slabel, data):
                wx.Panel.__init__(self, parent=parent, id=wx.ID_ANY)
                nestedNotebook = wx.Notebook(self, wx.ID_ANY)
                for sslabel in sorted(data[label][slabel][data[label][slabel].keys()[0]].keys()):
                    tab = NestedPanelThree(nestedNotebook, label, slabel, sslabel, data)
                    nestedNotebook.AddPage(tab, sslabel)
                sizer = wx.BoxSizer(wx.VERTICAL)
                sizer.Add(nestedNotebook, 1, wx.ALL|wx.EXPAND, 5)
                self.SetSizer(sizer)

        class NestedPanelThree(wx.Panel):
            #-------------------------------------------------------------------------------
            # Third notebook that creates checkboxes to select the monitoring sub-variables
            #-------------------------------------------------------------------------------
            def __init__(self, parent, label, slabel, sslabel, data):
                wx.Panel.__init__(self, parent=parent, id=wx.ID_ANY)
                labels = []
                chbox = []
                chboxdict = {}
                for ssslabel in sorted(data[label][slabel][data[label][slabel].keys()[0]][sslabel].keys()):
                    labels.append(ssslabel)
                for item in list(set(labels)):
                    cb = wx.CheckBox(self, -1, item)
                    chbox.append(cb)
                    chboxdict[item] = cb
                gridSizer = wx.GridSizer(np.shape(list(set(labels)))[0], 3, 5, 5)
                gridSizer.AddMany(chbox)
                self.SetSizer(gridSizer)

        ########################################################################
        class NestedNotebookDemo(wx.Notebook):
            #---------------------------------------------------------------------------------
            # Main notebook creating tabs for the monitored components
            #---------------------------------------------------------------------------------
            def __init__(self, parent, data):
                wx.Notebook.__init__(self, parent, id=wx.ID_ANY, style=wx.BK_DEFAULT)
                for label in sorted(data.keys()):
                    print label
                    tab = NestedPanelOne(self, label, data)
                    self.AddPage(tab, label)

        ########################################################################
        class DemoFrame(wx.Frame):
            #----------------------------------------------------------------------
            # Putting it all together
            #----------------------------------------------------------------------
            def __init__(self, data):
                wx.Frame.__init__(self, None, wx.ID_ANY,
                                  "pDAQ monitoring plotting tool", size=(800, 400))
                panel = wx.Panel(self)
                notebook = NestedNotebookDemo(panel, data)
                sizer = wx.BoxSizer(wx.VERTICAL)
                sizer.Add(notebook, 1, wx.ALL|wx.EXPAND, 5)
                panel.SetSizer(sizer)
                self.Layout()
                # Menu bar to be added later
                '''
                menubar = wx.MenuBar()
                file = wx.Menu()
                file.Append(1, '&Quit', 'Exit Tool')
                menubar.Append(file, '&File')
                self.SetMenuBar(menubar)
                self.Bind(wx.EVT_MENU, self.OnClose, id=1)
                '''
                self.Show()

        #----------------------------------------------------------------------
        if __name__ == "__main__":
            if len(sys.argv) == 1:
                raise SystemExit("Please specify a file to process")
            for f in sys.argv[1:]:
                data = sort.sorting(f)
                print data['stringHub'].keys()
                print data.keys()
                print data[data.keys()[0]].keys()
            print 'test'
            app = wx.PySimpleApp()
            frame = DemoFrame(data)
            app.MainLoop()
            print 'testend'

    and I would like to reduce this whole mess into something that only has three nested for loops, something like:

        for label in sorted(data.keys()):
            self.SubNoteBooks[label] = wx.Notebook(self.Notebook, wx.ID_ANY)
            self.Notebook.AddPage(self.SubNoteBooks[label], label)
            for slabel in sorted(data[label].keys()):
                self.SubNoteBooks[label][slabel] = wx.Notebook(self, wx.ID_ANY)
                self.SubNoteBooks[label].AddPage(self.SubNoteBooks[label][slabel], slabel)
                for sslabel in sorted(data[label][slabel][data[label][slabel].keys()[0]].keys()):
                    self.SubNoteBooks[label][slabel][sslabel] = wx.Notebook(self.Notebook, wx.ID_ANY)
                    self.Notebook.AddPage(self.SubNoteBooks[label][slabel][sslabel], sslabel)

    I have been fiddling with this, but the problem seems to be the line

        self.SubNoteBooks[label][slabel] = wx.Notebook(self, wx.ID_ANY)

    I get the error:

        Traceback (most recent call last):
          File "./reducelinenumbers.py", line 162, in <module>
            frame = DemoFrame(data)
          File "./reducelinenumbers.py", line 126, in __init__
            self.SubNoteBooks[label][slabel] = wx.Notebook(self, wx.ID_ANY)
        TypeError: 'Notebook' object does not support item assignment

    I understand why the Notebook type raises a TypeError here. Is there a way around this? Thanks a bunch in advance.
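    A hedged sketch of one way around the item-assignment error: keep the wx.Notebook objects as values in a single dict keyed by label tuples (wx widgets don't support subscript assignment, but a dict does). Note that each nested notebook must also be parented to the notebook that contains it, not to the top-level one:

        self.subnotebooks = {}
        for label in sorted(data.keys()):
            nb1 = self.subnotebooks[(label,)] = wx.Notebook(self.Notebook, wx.ID_ANY)
            self.Notebook.AddPage(nb1, label)
            for slabel in sorted(data[label].keys()):
                nb2 = self.subnotebooks[(label, slabel)] = wx.Notebook(nb1, wx.ID_ANY)
                nb1.AddPage(nb2, slabel)
                first = sorted(data[label][slabel].keys())[0]
                for sslabel in sorted(data[label][slabel][first].keys()):
                    nb3 = self.subnotebooks[(label, slabel, sslabel)] = wx.Notebook(nb2, wx.ID_ANY)
                    nb2.AddPage(nb3, sslabel)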


  • Overwhelmed by design patterns... where to begin?

    - by Pete
    I am writing simple prototype code to demonstrate and profile I/O schemes (HDF4, HDF5, HDF5 using parallel IO, NetCDF, etc.) for a physics code. Since the focus is on IO, the rest of the program is very simple: class Grid { public: floatArray x,y,z; }; class MyModel { public: MyModel(const int &nip1, const int &njp1, const int &nkp1, const int &numProcs); Grid grid; map<string, floatArray> plasmaVariables; }; Where floatArray is a simple class that lets me define arbitrarily dimensioned arrays and do mathematical operations on them (i.e. x+y is point-wise addition). Of course, I could use better encapsulation (write accessors/setters, etc.), but that's not the concept I'm struggling with. For the I/O routines, I am envisioning applying simple inheritance: an abstract I/O class defines read and write functions to fill in the "myModel" object, with derived classes for HDF4, HDF5, HDF5 using parallel IO, NetCDF, etc. The code should read data in any of these formats, then write out to any of these formats. In the past, I would add an AbstractIO member to myModel and create/destroy this object depending on which I/O scheme I want. In this way, I could do something like: myModelObj.ioObj->read('input.hdf') myModelObj.ioObj->write('output.hdf') I have a bit of OOP experience but very little on the design patterns front, so I recently acquired the Gang of Four book "Design Patterns: Elements of Reusable Object-Oriented Software". OOP designers: Which pattern(s) would you recommend I use to integrate I/O with the myModel object? I am interested in answering this for two reasons: To learn more about design patterns in general. To apply what I learn to help refactor a large, old, crufty/legacy physics code to be more human-readable and extensible. I am leaning towards applying the Decorator pattern to myModel, so I can attach the I/O responsibilities dynamically to myModel (i.e. whether to use HDF4, HDF5, etc.). However, I don't feel very confident that this is the best pattern to apply. Reading the Gang of Four book cover to cover before I start coding feels like a good way to develop an unhealthy caffeine addiction. What patterns do you recommend?
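    For comparison, a minimal Strategy-style sketch in C++ (the class names and the AbstractIO interface here are illustrative, not from the original code): instead of decorating the model, the model owns a swappable I/O object, which matches the "ioObj" usage above:

        #include <memory>
        #include <string>

        class MyModel;  // forward declaration

        // Abstract I/O interface: one concrete subclass per file format.
        class AbstractIO {
        public:
            virtual ~AbstractIO() {}
            virtual void read(MyModel &model, const std::string &file) = 0;
            virtual void write(const MyModel &model, const std::string &file) = 0;
        };

        class HDF5IO : public AbstractIO {
        public:
            void read(MyModel &model, const std::string &file) { /* HDF5-specific calls */ }
            void write(const MyModel &model, const std::string &file) { /* ... */ }
        };

        class MyModel {
        public:
            // Swap formats at runtime without touching the model's own code.
            void setIO(std::unique_ptr<AbstractIO> io) { ioObj = std::move(io); }
            void load(const std::string &file) { ioObj->read(*this, file); }
            void save(const std::string &file) { ioObj->write(*this, file); }
        private:
            std::unique_ptr<AbstractIO> ioObj;
        };

        // Usage:
        //   MyModel m;
        //   m.setIO(std::unique_ptr<AbstractIO>(new HDF5IO()));
        //   m.load("input.hdf");
        //   m.save("output.hdf");

    The appeal of Strategy here is that the model/IO split stays one object with one role each; Decorator tends to fit better when you want to stack behaviors on top of each other rather than select exactly one.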


  • Mootools 1.2.4 delegation not working in IE8...?

    - by michael
    Hey there everybody-- So I have a listbox next to a form. When the user clicks an option in the select box, I make a request for the related data, returned in a JSON object, which gets put into the form elements. When the form is saved, the request goes through and the listbox is rebuilt with the updated data. Since it's being rebuilt, I'm trying to use delegation on the listbox's parent div for the onchange code. The trouble I'm having is with IE8 (big shock) not firing the delegated event. I have the following HTML: <div id="listwrapper" class="span-10 append-1 last"> <select id="list" name="list" size="20"> <option value="86">Adrian Franklin</option> <option value="16">Adrian McCorvey</option> <option value="196">Virginia Thomas</option> </select> </div> and the following script to go with it: window.addEvent('domready', function() { var jsonreq = new Request.JSON(); $('listwrapper').addEvent('change:relay(select)', function(e) { alert("this doesn't fire in IE8"); e.stop(); var status = $('statuswrapper').empty().addClass('ajax-loading'); jsonreq.options.url = 'de_getformdata.php'; jsonreq.options.method = 'post'; jsonreq.options.data = {'getlist':'<?php echo $getlist ?>','pkey':$('list').value}; jsonreq.onSuccess = function(rObj, rTxt) { status.removeClass('ajax-loading'); for (key in rObj) { status.set('html','You are currently editing '+rObj['cname']); if ($chk($(key))) $(key).value = rObj[key]; } $('lalsoaccomp-yes').set('checked',(($('naccompkey').value > 0)?'true':'false')); $('lalsoaccomp-no').set('checked',(($('naccompkey').value > 0)?'false':'true')); } jsonreq.send(); }); }); (I took out a bit of unrelated stuff.) So this all works as expected in Firefox, but IE8 refuses to fire the delegated change event on the select element. If I attach the change function directly to the select, then it works just fine. Am I missing something? Does IE8 just not like the :relay? Sidenote: I'm very new to mootools and javascripting, etc., so if there's something that can be improved code-wise, please let me know too. Thanks!
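    For what it's worth, the usual explanation: IE8 and earlier do not bubble the change event, so a delegated change handler never fires there (MooTools' :relay depends on bubbling). A sketch of the direct re-attach workaround, with onListChange standing in for the existing handler above:

        // Bind directly to the <select>; call again every time the list is rebuilt.
        function bindListChange() {
            $('list').addEvent('change', onListChange);
        }

        window.addEvent('domready', bindListChange);

        // ...after the ajax call that rebuilds the listbox:
        // $('listwrapper').set('html', newListHtml);
        // bindListChange();   // the old <select> is gone, so re-bind to the new one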


  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is a "can it be improved" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goals include typed objects. Idea: Every primitive is a semi-class (it can be used as if it were a normal class, but it's stored more compactly). Every class is represented by primitives and some metadata (containing class properties, inheritance stuff, etc.). The metadata is complex: it doesn't use fields but instead context switches. For primitives, the metadata is very small, compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The metadata will contain flags that the type is an array of booleans, which contains 32 entries. The metadata is very compacted, at bit level: using a sort of "packing" mechanism, which is read by an FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based data structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 01010101 01010101 01010101 01010101 Data (user defined) Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 01010101 01010101 01010101 01010101 How can I improve this? (Both performance- and memory-consumption-wise)


  • Improving performance on data pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be nothing less than 1000) in an Excel spreadsheet. And in this sheet our project has 150 columns like this: Now, our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel file sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is when we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages and format the data as per the requirements. So we need to parse and evaluate the data in each row at runtime. Now all the formatting of data and validations come from the back-end database, and we have it in a data-table (dtValidateAndFormatConditions). The conditions would be around 50. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but now I have reduced it to 20-30 seconds. So far I have increased the speed by making an expression parser of my own - not by any algorithmic change. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like: public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow) { foreach (DataRow dRow in dtValidateAndFormatConditions.Rows) { string Condition = (string)dRow["Condition"]; string FormatValue = (string)dRow["Value"]; GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow); Condition = Parse(Condition); dRow["Condition"] = Condition; FormatValue = Parse(FormatValue); dRow["Value"] = FormatValue; } } The above code gets called row-wise like this: public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr) { int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount; for (int iRow = iRowStart; iRow < iRowEnd; iRow++) { ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow); } } Please note my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not shown, I have been using LINQ in some parts of my code.)
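    One direction that usually pays off here, sketched with illustrative names (ParseToDelegate and ReportError are hypothetical placeholders, not from the original code): parse each of the ~50 conditions exactly once into a compiled delegate before the paste, so the inner N x 50 work is pure evaluation with no re-parsing and no per-row string building:

        using System;
        using System.Collections.Generic;
        using System.Data;

        public class PasteValidator
        {
            private List<Func<DataRow, bool>> compiled;

            // Run once, before the paste: parse every condition a single time.
            public void CompileConditions(DataTable dtValidateAndFormatConditions)
            {
                compiled = new List<Func<DataRow, bool>>();
                foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
                    compiled.Add(ParseToDelegate((string)dRow["Condition"]));
            }

            // Run per paste: N rows x 50 pre-parsed checks, no parsing in the loop.
            public void ValidateAndFormat(DataTable dtChangedRecords, int rowStart, int rowCount)
            {
                for (int iRow = rowStart; iRow < rowStart + rowCount; iRow++)
                    foreach (var check in compiled)
                        if (!check(dtChangedRecords.Rows[iRow]))
                            ReportError(iRow);
            }

            // Hypothetical: build an evaluator once from the condition text.
            private Func<DataRow, bool> ParseToDelegate(string condition)
            {
                throw new NotImplementedException();
            }

            private void ReportError(int row) { /* hypothetical error collection */ }
        }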


  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that has various table layouts. I have good enough table detection routines and merged-cell handling, but I am running into a problem when it comes to dealing with borders, namely performance. The bordered regions in some of these files have meaning. Data setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This works wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element, which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also). Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region. What I have done: I initially created a BFS (breadth-first search) fill routine to find these areas. This works wonderfully and fast for "normal"-sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or as the opposing border in the adjacent cell. That's OK; I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells, using the borders as my edges, and use a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do something similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that to run a separate check from the found closed node to the previous node, ignoring that one edge. This could work, but wouldn't be much better performance-wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.
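    One alternative worth considering, sketched in Python for brevity since the VB6 translation is mechanical: a single union-find (disjoint-set) pass over the whole grid. Union every pair of adjacent cells NOT separated by a border; each resulting set is one enclosed region. Total work is near-linear in the number of cells, regardless of how many regions there are. The has_border_between predicate is assumed to be supplied by the importer's consolidated border data:

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        def regions(rows, cols, has_border_between):
            parent = {(r, c): (r, c) for r in range(rows) for c in range(cols)}

            def union(a, b):
                ra, rb = find(parent, a), find(parent, b)
                if ra != rb:
                    parent[ra] = rb

            # One pass: merge right and down neighbors with no border in between.
            for r in range(rows):
                for c in range(cols):
                    if c + 1 < cols and not has_border_between((r, c), (r, c + 1)):
                        union((r, c), (r, c + 1))
                    if r + 1 < rows and not has_border_between((r, c), (r + 1, c)):
                        union((r, c), (r + 1, c))

            # Group cells by their set representative: one list per region.
            groups = {}
            for cell in parent:
                groups.setdefault(find(parent, cell), []).append(cell)
            return list(groups.values())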


  • Unit Testing - Am I doing it right?

    - by baron
    Hi everyone, Basically I have been programming for a little while, and after finishing my last project I can fully understand how much easier it would have been if I'd done TDD. I guess I'm still not doing it strictly, as I am still writing code and then writing a test for it; I don't quite get how the test comes before the code if you don't know what structures you're using and how you're storing data, etc... but anyway... Kind of hard to explain, but basically let's say for example I have Fruit objects with properties like id, color and cost. (All stored in a text file; ignore completely any database logic, etc.) FruitID FruitName FruitColor FruitCost 1 Apple Red 1.2 2 Apple Green 1.4 3 Apple HalfHalf 1.5 This is all just for example. But let's say I have this as a collection of Fruit (it's a List<Fruit>) objects in this structure. And my logic will say to reorder the fruit ids in the collection if a fruit is deleted (this is just how the solution needs to be), e.g. if 1 is deleted, object 2 takes fruit id 1, object 3 takes fruit id 2. Now I want to test the code I've written which does the reordering, etc. How can I set this up to do the test? Here is where I've got to so far. Basically I have a FruitManager class with all the methods, like DeleteFruit, etc. It usually holds the list, but I've changed the method to test it so that it accepts a list and the info on the fruit to delete, then returns the list. Unit-testing-wise: am I basically doing this the right way, or have I got the wrong idea? I then test deleting different valued objects / datasets to ensure the method is working properly. [Test] public void DeleteFruit() { var fruitList = CreateFruitList(); var fm = new FruitManager(); var resultList = fm.DeleteFruitTest("Apple", 2, fruitList); //Assert that fruitobject with x properties is not in list ? how } private static List<Fruit> CreateFruitList() { //Build test data var f01 = new Fruit {Name = "Apple", Id = 1, etc...}; var f02 = new Fruit {Name = "Apple", Id = 2, etc...}; var f03 = new Fruit {Name = "Apple", Id = 3, etc...}; var fruitList = new List<Fruit> {f01, f02, f03}; return fruitList; }
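    For the "how do I assert it" part, a sketch assuming NUnit plus System.Linq; the renumbering asserted here follows the description above (delete id 2, the remaining fruits are compacted to ids 1 and 2, so no id 3 survives):

        using System.Linq;
        using NUnit.Framework;

        [Test]
        public void DeleteFruit_RemovesFruitAndRenumbersIds()
        {
            var fruitList = CreateFruitList();
            var fm = new FruitManager();

            var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);

            // One fruit fewer, and the ids were compacted after the delete.
            Assert.AreEqual(2, resultList.Count);
            Assert.IsFalse(resultList.Any(f => f.Id == 3));  // highest id no longer exists
            Assert.AreEqual(1, resultList[0].Id);
            Assert.AreEqual(2, resultList[1].Id);
        }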


  • To Interface or Not?: Creating a polymorphic model relationship in Ruby on Rails dynamically

    - by Globalkeith
    Please bear with me for a moment as I try to explain exactly what I would like to achieve. In my Ruby on Rails application I have a model called Page. It represents a web page. I would like to enable the user to arbitrarily attach components to the page. Some examples of "components" would be Picture, PictureCollection, Video, VideoCollection, Background, Audio, Form, Comments. Currently I have a direct relationship between Page and Picture like this: class Page < ActiveRecord::Base has_many :pictures, :as => :imageable, :dependent => :destroy end class Picture < ActiveRecord::Base belongs_to :imageable, :polymorphic => true end This relationship enables the user to associate an arbitrary number of Pictures with the page. Now if I want to provide multiple collections I would need an additional model: class PictureCollection < ActiveRecord::Base belongs_to :collectionable, :polymorphic => true has_many :pictures, :as => :imageable, :dependent => :destroy end And alter Page to reference the new model: class Page < ActiveRecord::Base has_many :picture_collections, :as => :collectionable, :dependent => :destroy end Now it would be possible for the user to add any number of image collections to the page. However, this is still very static in terms of the :picture_collections reference in the Page model. If I add another "component", for example :video_collections, I would need to declare another reference in Page for that component type. So my question is this: Do I need to add a new reference for each component type, or is there some other way? In ActionScript/Java I would declare an interface Component and make all components implement that interface; then I could just have a single attribute :components which contains all of the dynamically associated model objects. This is Rails, and I'm sure there is a great way to achieve this, but it's a tricky one to Google. Perhaps you good people have some wise suggestions. Thanks in advance for taking the time to read and answer this.
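    One hedged sketch of the single-:components idea (the model and association names are illustrative): introduce a join model, so Page has exactly one association no matter how many component types exist, and the polymorphic side points at the component rather than at the page:

        # Join model: one row per component attached to a page.
        class PageComponent < ActiveRecord::Base
          belongs_to :page
          belongs_to :component, :polymorphic => true  # Picture, VideoCollection, Audio, ...
        end

        class Page < ActiveRecord::Base
          has_many :page_components, :dependent => :destroy

          # The single, mixed list of everything attached to this page.
          def components
            page_components.map(&:component)
          end
        end

        # Each component type only needs the inverse association:
        class Picture < ActiveRecord::Base
          has_one :page_component, :as => :component
        end

        # Attaching: page.page_components.create(:component => some_picture)

    This plays the role an interface would in ActionScript/Java: the "interface" is simply whatever duck-typed methods (render, etc.) each component model agrees to implement.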


  • CKEditor not displaying

    - by user1708468
    I am trying to integrate CKEditor into an MVC application. As far as I can tell, all I should really have to do is add the following to my master page: <script type="text/javascript" src="../../ckeditor/ckeditor.js"></script> <script type="text/javascript" src="../../ckeditor/adapters/jquery.js"></script> <script type="text/jscript" src="../../Scripts/jquery-1.3.2.js"></script> Then on my view itself I have the following code: <script type="text/javascript"> $(document).ready(function() { $('#news').ckeditor(); }); </script> <fieldset> <legend>Fields</legend> <p> <label for="title">Title:</label> <%=Html.TextBox("title")%> <%= Html.ValidationMessage("title", "*") %> </p> <p> <label for="news">News:</label> <%=Html.TextArea("news")%> <%= Html.ValidationMessage("news", "*") %> </p> <p> <label for="publishedDate">Publication Date:</label> <%= Html.TextBox("publishedDate") %> <%= Html.ValidationMessage("publishedDate", "*") %> </p> <p> <input type="submit" value="Create" /> </p> </fieldset> Please bear in mind I am not trying to get this to actually DO anything postback-wise, just to actually render in the first place. Can someone point out exactly what it is I am doing wrong? Oh, and if it helps any, VS is also giving me the following warning: Warning 1 Error updating JScript IntelliSense: ..Cut to Protect the innocent..\ckeditor\ckeditor.js: 'getFirst()' is null or not an object @ 15:180 ..Cut to Protect the innocent..\Views\Shared\Admin.Master 1 1 ilaTraining
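    One likely culprit, offered as a guess from the snippet alone: jQuery is loaded after the CKEditor jQuery adapter (and with type="text/jscript" rather than text/javascript). The adapter extends jQuery, so jQuery has to already be present when it runs. The order would be:

        <!-- jQuery first, then CKEditor, then the adapter that bridges the two. -->
        <script type="text/javascript" src="../../Scripts/jquery-1.3.2.js"></script>
        <script type="text/javascript" src="../../ckeditor/ckeditor.js"></script>
        <script type="text/javascript" src="../../ckeditor/adapters/jquery.js"></script>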


  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of: use Test::More; my @test_set = ( [ "Test #1", $param1, $param2, ... ] ,[ "Test #2", $param1, $param2, ... ] # ,... ); foreach my $test (@test_set) { run_test($test); } sub run_test { # $expected_tests += count_tests($test); ok(test1($test)) || diag("Test1 failed"); # ... } The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows: use Test::More; BEGIN { our @test_set = (); # Same set of tests as above my $expected_tests = 0; foreach my $test (@test_set) { $expected_tests += count_tests($test); } plan tests => $expected_tests; } our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :( foreach my $test (@test_set) { run_test($test); } # Same sub run_test {} # Same I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure if that's any better idiomatically.
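    A sketch of the simplest route, which works with Test::More 0.80: skip the BEGIN block entirely. plan() may be called at run time, provided it happens before the first assertion, which removes the duplicated our declaration (count_tests is the asker's own helper):

        use strict;
        use warnings;
        use Test::More;   # note: no plan given yet

        my @test_set = (
            [ "Test #1", 'param1', 'param2' ],
            [ "Test #2", 'param1', 'param2' ],
        );

        # Count first, plan second, run third -- all at run time.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;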


  • Solve a maze using multicores?

    - by acidzombie24
    This question is messy; I don't need a working solution, I need some pseudocode. How would I solve this maze? This is a homework question. I have to get from point green to red. At every fork I need to 'spawn a thread' and go that direction. I need to figure out how to get to red, but I am unsure how to avoid paths I have already taken (finishing with any path is OK; I am just not allowed to go in circles). Here's an example of my problem: I start by moving down and I see a fork, so one thread goes right and one goes down (or this thread can take it, it doesn't matter). Now let's ignore the rest of the forks and say the one going right hits the wall, goes down, hits the wall and goes left, then goes up. The other thread goes down, hits the wall, then goes all the way right. The bottom path has been taken twice, by starting at different sides. How do I mark this path as having been taken? Do I need a lock? Is this the only way? Is there a lockless solution? Implementation-wise I was thinking I could represent the maze something like the class below. I don't like this solution because there is a LOT of locking (assuming I lock before each read and write of the haveTraverse member). I don't need to use the MazeSegment class below; I just wrote it up as an example. I am allowed to construct the maze however I want. I was thinking maybe the solution requires no connecting paths, and that's hassling me. Maybe I could split the map up instead of using the format below (which is easy to read and understand). But if I knew how to split it up, I would know how to walk it, thus the problem. How do I walk this maze efficiently? The only hint I received was don't try to conserve memory by reusing it; make copies. However, that was related to a problem with ordering a list, and I don't think the hint was a hint for this. class MazeSegment { enum Direction { up, down, left, right } List<Pair<Direction, MazeSegment*>> ConnectingPaths; int line_length; bool haveTraverse; } MazeSegment root; class MazeSegment { enum Direction { up, down, left, right } List<Pair<Direction, MazeSegment*>> ConnectingPaths; bool haveTraverse; } void WalkPath(MazeSegment segment) { if (segment.haveTraverse) return; segment.haveTraverse = true; foreach (var v in segment.ConnectingPaths) { if (v.haveTraverse == false) spawn_thread(v); } } WalkPath(root);
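    A sketch of the test-and-mark idea in Python (the maze API -- open_neighbors and goal -- is assumed, not given in the question). One small lock guards the check-and-set on the visited set; that atomic claim is exactly what stops two threads from walking the same corridor from opposite ends:

        import threading

        visited = set()
        visited_lock = threading.Lock()
        solutions = []

        def walk(maze, pos, path):
            while True:
                with visited_lock:
                    if pos in visited:      # another thread already claimed this cell
                        return
                    visited.add(pos)        # atomically claim it
                path = path + [pos]         # copy, don't share (per the hint about copies)
                if pos == maze.goal:
                    solutions.append(path)
                    return
                branches = maze.open_neighbors(pos)  # assumed maze API
                if not branches:
                    return
                for branch in branches[1:]:          # spawn a thread per extra fork
                    threading.Thread(target=walk, args=(maze, branch, path)).start()
                pos = branches[0]                    # this thread keeps walking one branch

    A genuinely lockless variant would replace the lock with an atomic test-and-set per cell (e.g. compare-and-swap on a per-cell flag); the structure of the walk stays the same.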


  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this? The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this: ben 969 0.0 0.2 96088 4128 ? D 18:21 0:00 smbd -F root 1708 0.0 0.2 93468 4748 ? Ss 18:44 0:00 smbd -F root 1711 0.0 0.0 93468 1364 ? S 18:44 0:00 smbd -F ben 3148 0.0 0.2 96052 4160 ? D Mar07 0:00 smbd -F ... There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this: ben@jack:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2] 2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> And running mdadm --detail /dev/md0 gives this: ben@jack:~$ sudo mdadm --detail /dev/md0 /dev/md0: Version : 00.90 Creation Time : Sat Oct 31 20:53:10 2009 Raid Level : raid5 Array Size : 2930271872 (2794.53 GiB 3000.60 GB) Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB) Raid Devices : 3 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Mar 7 03:06:35 2011 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : f114711a:c770de54:c8276759:b34deaa0 Events : 0.208245 Number Major Minor RaidDevice State 3 8 17 0 faulty spare rebuilding /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty: ben@jack:~$ grep sdb /var/log/messages ... Mar 7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1). Mar 7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code Mar 7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Mar 7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor] Mar 7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed Mar 7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00 To me it is clear that device sdb has failed, and I need to stop the array, shutdown, replace it, reboot, then repair the array, bring it back up and mount the filesystem. 
I cannot hot-swap a replacement drive in, and don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now: ben@jack:~$ sudo umount /storage umount: /storage: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptable'? What would happen if I tried to reboot? Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
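    For what it's worth, a sketch of the usual sequence (device names copied from the question; double-check against your own mdadm output first). Processes stuck in uninterruptible D-state generally cannot be killed, so if fuser shows they never drain, a reboot is often the only way to get the umount through:

        # See exactly which processes are pinning the mount:
        fuser -vm /storage

        # Mark the dying disk failed (it may already be) and pull it from the array:
        mdadm --manage /dev/md0 --fail /dev/sdb1
        mdadm --manage /dev/md0 --remove /dev/sdb1

        # Power down, swap the physical disk, boot, then re-add;
        # the rebuild onto the new disk starts automatically:
        mdadm --manage /dev/md0 --add /dev/sdb1
        watch cat /proc/mdstat    # follow the resync progress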


  • CodePlex Daily Summary for Monday, May 03, 2010

    CodePlex Daily Summary for Monday, May 03, 2010

    New Projects
    - .radiko: A simple radiko (http://radiko.jp/) client that uses Aero Glass; radio stations can be switched from the task-tray icon.
    - 7Scale: Empty
    - B2C MVC Plattform: The B2C MVC Plattform aims to be a pluggable site framework to help small businesses accomplish basic tasks between business and customers.
    - ElValWeb: The goal of the project is to create a full-featured implementation of ModelValidatorProvider for the Enterprise Library Application Validation Block, which ...
    - esatis yazilimi: Build a sales store website with ASP.NET software.
    - IEnumerable.It sample code: IEnumerable.It sample code
    - jQuery MicroAjax for ASP.NET: MicroAjax is a set of jQuery plugins and .NET components designed to provide simple, powerful and efficient Ajax-centric web application design pat...
    - Karbon VOS: Karbon VOS is an advanced Virtual Operating System Template for Visual Basic Express. It's developed in Visual Basic. Karbon VOS hopes to one day b...
    - LINQ Mapper: LINQ Mapper translates simple LINQ queries between different sources. It allows you to write queries against your domain model, but have them run ...
    - Meccano Silverlight Framework: Meccano is a new generation of frameworks for creation of LOB Silverlight applications based on MEF, RX, WCF, ADO.NET Data Services etc. It is inte...
    - Multiuse Model View (MMV) Library: This project is an open source library for the Multiuse Model View (MMV) pattern for building robust WPF and ASP.Net applications. Visit my blog ht...
    - Process Affinity Control: Process Affinity Control allows setting the affinity masks of processes based on rules.
    - SilverSpatial: This project helps bridge the gap between Silverlight and geo-spatial data types (such as SQL Spatial). It implements the Well-Known-Binary (WKB) fo...
    - StageAssets: Application for storing data about "things" and people in theatre, for example equipment, actors and so on.
    - Stratosphere: Mono-compatible library with a set of primitives to work with scalable table, queue and block containers, with corresponding implementations for Amazo...
    - TRX Web-Viewer: A simple web-based application to upload and view VSTS 2008 and VSTS 2010 test result files, with some basic lookup and feature-wise management of r...
    - WDT2: WDT 2 is the school project to begin learning the .NET environment. The main focus is on learning the use of almost all the components.
    - WPF Behavior Library: WPF Behavior Library is a set of additional actions for WPF that allow you to add extra behaviors to a control quickly and easily. Currently the on...
    - YouTubeEmbeddedVideo WebControl for ASP.NET: A control to embed YouTube videos in ASP.NET pages. Works in C# and VB.NET.

    New Releases
    - .radiko: beta: Only Tokyo stations are supported; the rest is quick and dirty.
    - ActiveWorlds Managed .NET SDK: AwManaged Technology Preview - WIN32 (Alpha): This WIN32 release contains the Server Console Application. The Setup executable should be run as administrator on an O.S. using UAC (Vista/Win7).
    - AJAX Control Framework: v1.0.1.0: Contains a Bing Maps sample project, a number of bug fixes and a few performance improvements. AJAX-enable ANY custom control that der...
    - App_Code (and Usercontrol) Editor (ACE): v1.0.0 alpha: The first alpha release of the AppCode Editor for Umbraco 4.0.3 is now available to download! Tested to work with usercontrols - pre-compilation wi...
    - ElValWeb: ElValWeb 0.0.1.0: Version 0.0.1.0 contains client validation support for AndCompositeValidator, ContainsCharactersValidator, DomainValidator, NotNullValidator, Or...
    - esatis yazilimi: magaza: The store's software and the database's software.
    - Grunty OS: Grunty OS USB: Download Grunty OS for USB.
    - Grunty OS: Grunty OS.ISO: Grunty OS ISO.
    - Karbon VOS: Milestone 1 (Kaptua): Milestone 1...
    - Live Meeting API Wrapper: LiveMeetingAPIWrapperV1.2: Added get meeting and update meeting.
    - Multiuse Model View (MMV) Library: v0.3: First alpha release. Medium amount of functionality and some use cases tested.
    - MVC Foolproof Validation: Beta 0.9.3774: Adds resource-provided error messages, regular expression operators and a new RegularExpressionIf attribute.
    - Process Affinity Control: Version 1.0.0: This is the first release. Planned features for the next release: no administrative privileges needed to run the manager; select the active scena...
    - SharePoint 2010 Service Manager: SharePoint 2010 Service Manager 1.1: Added support to run under UAC with automatic security elevation.
    - SharePoint Event Handler Manager: Event Handler Manager 2.0: Please download the application here: http://www.ackermantech.com/registerevents.aspx
    - SkyDrive Synchronizer: SkyDrive Sync Beta 0.1: Beta release includes: upload and download; synchronize updated files; delete files on web/locally if not in source; split larger files into sma...
    - Stratosphere: Stratosphere 1.0.0.0: Initial beta release.
    - Suggested Resources for .NET Developers: 0.8.0.0 VS2010 - focus on displaying content: This is the first release of Suggested Resources that can be downloaded from the internet. While there is still a lot of work to be done this rele...
    - TRX Web-Viewer: TRX Web-Viewer V1.0: First working version.
    - VCC: Latest build, v2.1.30502.0: Automatic drop of latest build.
    - WatchersNET.TagCloud: WatchersNET.TagCloud 01.04.00: What's new: new Tag Mode: Search Referrers (shows search tags from Google, Ask, Bing, Yahoo and the DNN site search); Taxonomy Tags now contain L...
    - Web/Cloud Applications Development Framework | Visual WebGui: 6.4 Beta 2e: Fully featured beta version of the Visual WebGui Web/Cloud Application Development Framework.
    - WPF Behavior Library: WPF Behavior Library 0.1 Release: First alpha release of the WPF Behavior Library. It should be stable but doesn't have all of the features it will have in the future and the API ma...
    - xvanneste: Sharepoint Social Network Client: A client that provides access to the SharePoint social network outside the browser.

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; patterns & practices – Enterprise Library; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); iTuner - The iTunes Companion; ASP.NET; DotNetNuke® Community Edition

    Most Active Projects: Ionics Isapi Rewrite Filter; patterns & practices – Enterprise Library; Rawr; HydroServer - CUAHSI Hydrologic Information System Server; AJAX Control Framework; patterns & practices: Azure Security Guidance; TinyProject; BlogEngine.NET; NB_Store - Free DotNetNuke Ecommerce Catalog Module; Dambach Linear Algebra Framework


  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let's alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
    If the file growth is lower than 64MB then 4 VLFs are created. If the growth is between 64MB and 1GB then 8 VLFs are created. If the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.
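    If you just want a point-in-time count rather than an alert, here is a quick sketch (the column list matches the SQL Server 2008-era output of DBCC LOGINFO; 2012 and later add a RecoveryUnitId column, so adjust accordingly):

        DECLARE @vlf TABLE (
            FileId      INT,
            FileSize    BIGINT,
            StartOffset BIGINT,
            FSeqNo      BIGINT,
            Status      INT,
            Parity      INT,
            CreateLSN   NUMERIC(38)
        );

        INSERT INTO @vlf EXEC ('DBCC LOGINFO');

        -- The row count of DBCC LOGINFO is the VLF count for the current database.
        SELECT COUNT(*) AS VLF_Count FROM @vlf;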
    Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources: MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.


  • OpenIndiana (illumos): vmxnet3 interface lost on reboot

    - by protomouse
    I want my VMware vmxnet3 interface to be brought up with DHCP on boot. I can manually configure the NIC with: # ifconfig vmxnet3s0 plumb # ipadm create-addr -T dhcp vmxnet3s0/v4dhcp But after creating /etc/dhcp.vmxnet3s0 and rebooting, the interface is down and the logs show: Aug 13 09:34:15 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no Aug 13 09:34:15 neumann vmxnet3s: [ID 715698 kern.notice] vmxnet3s:0: stop() Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no Aug 13 09:34:17 neumann vmxnet3s: [ID 920500 kern.notice] vmxnet3s:0: start() Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(TxRingSize) -> 256 Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxRingSize) -> 256 Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxBufPoolLimit) -> 512 Aug 13 09:34:17 neumann nwamd[491]: [ID 605049 daemon.error] 1: nwamd_set_unset_link_properties: dladm_set_linkprop failed: operation not supported Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x20000) -> no Aug 13 09:34:17 neumann nwamd[491]: [ID 751932 daemon.error] 1: nwamd_down_interface: ipadm_delete_addr failed on vmxnet3s0: Object not found Aug 13 09:34:17 neumann nwamd[491]: [ID 819019 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv4 failed for vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 160156 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv6 failed for vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 771489 daemon.error] 1: add_ip_address: ipadm_create_addr failed on vmxnet3s0: Operation not supported on disabled object Aug 13 09:34:17 neumann nwamd[491]: [ID 405346 daemon.error] 9: start_dhcp: ipadm_create_addr failed for vmxnet3s0: Operation not supported on disabled object I then tried disabling network/physical:nwam in favour of network/physical:default. This works, the interface is brought up but physical:default fails and my network services (e.g. NFS) refuse to start. 
        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:1: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:2: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:3: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:4: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:5: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:6: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:7: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:8: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128
        vmxnet3s0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 9000 index 2
                inet6 ::/0

        # cat /var/svc/log/network-physical\:default.log
        [ Aug 16 09:46:39 Enabled. ]
        [ Aug 16 09:46:41 Executing start method ("/lib/svc/method/net-physical"). ]
        [ Aug 16 09:46:41 Timeout override by svc.startd. Using infinite timeout. ]
        starting DHCP on primary interface vmxnet3s0
        ifconfig: vmxnet3s0: DHCP is already running
        [ Aug 16 09:46:43 Method "start" exited with status 96. ]

    NFS server not running:

        # svcs -xv network/nfs/server
        svc:/network/nfs/server:default (NFS server)
         State: offline since August 16, 2012 09:46:40 AM UTC
        Reason: Service svc:/network/physical:default is not running because a method failed.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:default
        Reason: Service svc:/network/physical:nwam is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:nwam
        Reason: Service svc:/network/nfs/nlockmgr:default is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/network/nfs/nlockmgr:default
           See: man -M /usr/share/man -s 1M nfsd
        Impact: This service is not running.

    I'm new to the world of Solaris, so any help would be much appreciated. Thanks!
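    One approach worth trying, assuming the goal is simply DHCP on vmxnet3s0 under physical:default: the commands below are taken from the question itself plus standard SMF administration, and the removal of the legacy /etc/dhcp.vmxnet3s0 flag file is an assumption about what makes the start method report "DHCP is already running".

        # 1. Swap the network services:
        svcadm disable svc:/network/physical:nwam
        svcadm enable  svc:/network/physical:default

        # 2. Remove the legacy boot-time DHCP flag file so the net-physical
        #    start method does not launch a second DHCP client (assumption):
        rm /etc/dhcp.vmxnet3s0

        # 3. Create a persistent DHCP address object; ipadm configuration
        #    survives reboots unless -t (temporary) is given:
        ipadm create-addr -T dhcp vmxnet3s0/v4dhcp

        # 4. Once physical:default starts cleanly, clear any services left
        #    in maintenance so their dependents (e.g. NFS) can come up:
        svcadm clear svc:/network/nfs/server:default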

    Read the article

  • KVM/Libvirt bridged/routed networking not working on newer guest kernels

    - by SharkWipf
    I have a dedicated server running Debian 6, with Libvirt (0.9.11.3) and Qemu-KVM (qemu-kvm-1.0+dfsg-11, Debian). I am having a problem getting bridged/routed networking to work in KVM guests with newer kernels (2.6.38). NATted networking works fine, though, and older guest kernels work perfectly fine as well. The host kernel is at version 3.2.0-2-amd64; the problem was also there on an older host kernel.

    The contents of the host's /etc/network/interfaces (ip removed):

        # Loopback device:
        auto lo
        iface lo inet loopback

        # bridge
        auto br0
        iface br0 inet static
            address 176.9.xx.xx
            broadcast 176.9.xx.xx
            netmask 255.255.255.224
            gateway 176.9.xx.xx
            pointopoint 176.9.xx.xx
            bridge_ports eth0
            bridge_stp off
            bridge_maxwait 0
            bridge_fd 0
            up route add -host 176.9.xx.xx dev br0 # VM IP
            post-up mii-tool -F 100baseTx-FD br0
            # default route to access subnet
            up route add -net 176.9.xx.xx netmask 255.255.255.224 gw 176.9.xx.xx br0

    The output of ifconfig -a on the host:

        br0       Link encap:Ethernet  HWaddr 54:04:a6:8a:66:13
                  inet addr:176.9.xx.xx  Bcast:176.9.xx.xx  Mask:255.255.255.224
                  inet6 addr: fe80::5604:a6ff:fe8a:6613/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:20216729 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:19962220 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:14144528601 (13.1 GiB)  TX bytes:7990702656 (7.4 GiB)

        eth0      Link encap:Ethernet  HWaddr 54:04:a6:8a:66:13
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:26991788 errors:0 dropped:12066 overruns:0 frame:0
                  TX packets:19737261 errors:270082 dropped:0 overruns:0 carrier:270082
                  collisions:1686317 txqueuelen:1000
                  RX bytes:15459970915 (14.3 GiB)  TX bytes:6661808415 (6.2 GiB)
                  Interrupt:17 Memory:fe500000-fe520000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:6240133 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6240133 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:6081956230 (5.6 GiB)  TX bytes:6081956230 (5.6 GiB)

        virbr0    Link encap:Ethernet  HWaddr 52:54:00:79:e4:5a
                  inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:225016 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:412958 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:16284276 (15.5 MiB)  TX bytes:687827984 (655.9 MiB)

        virbr0-nic Link encap:Ethernet  HWaddr 52:54:00:79:e4:5a
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        vnet0     Link encap:Ethernet  HWaddr fe:54:00:93:4e:68
                  inet6 addr: fe80::fc54:ff:fe93:4e68/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:607670 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:5932089 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:83574773 (79.7 MiB)  TX bytes:1092482370 (1.0 GiB)

        vnet1     Link encap:Ethernet  HWaddr fe:54:00:ed:6a:43
                  inet6 addr: fe80::fc54:ff:feed:6a43/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:922132 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6342375 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:251091242 (239.4 MiB)  TX bytes:1629079567 (1.5 GiB)

        vnet2     Link encap:Ethernet  HWaddr fe:54:00:0d:cb:3d
                  inet6 addr: fe80::fc54:ff:fe0d:cb3d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:9461 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:665189 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:4990275 (4.7 MiB)  TX bytes:49229647 (46.9 MiB)

        vnet3     Link encap:Ethernet  HWaddr fe:54:cd:83:eb:aa
                  inet6 addr: fe80::fc54:cdff:fe83:ebaa/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1649 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12177 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:77233 (75.4 KiB)  TX bytes:2127934 (2.0 MiB)

    The guest's /etc/network/interfaces, in this case running Ubuntu 12.04 (ip removed):

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 176.9.xx.xx
            netmask 255.255.255.248
            gateway 176.9.xx.xx     # Host IP
            pointopoint 176.9.xx.xx # Host IP
            dns-nameservers 8.8.8.8 8.8.4.4

    The output of ifconfig -a on the guest:

        eth0      Link encap:Ethernet  HWaddr 52:54:cd:83:eb:aa
                  inet addr:176.9.xx.xx  Bcast:0.0.0.0  Mask:255.255.255.255
                  inet6 addr: fe80::5054:cdff:fe83:ebaa/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:14190 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1768 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2614642 (2.6 MB)  TX bytes:82700 (82.7 KB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:954 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:954 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:176679 (176.6 KB)  TX bytes:176679 (176.6 KB)

    Output of ping -c4 on the guest:

        PING google.nl (173.194.35.151) 56(84) bytes of data.
        64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=1 ttl=55 time=14.7 ms
        From static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx): icmp_seq=2 Redirect Host(New nexthop: static.161.82.9.176.clients.your-server.de (176.9.82.161))
        64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=2 ttl=55 time=15.1 ms
        From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=3 Destination Host Unreachable
        From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=4 Destination Host Unreachable

        --- google.nl ping statistics ---
        4 packets transmitted, 2 received, +2 errors, 50% packet loss, time 3002ms
        rtt min/avg/max/mdev = 14.797/14.983/15.170/0.223 ms, pipe 2

    The static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx) address is the host's IP. I have encountered this problem with every guest OS I've tried: Fedora, Ubuntu (server/desktop) and Debian with an upgraded kernel. I've also tried compiling the guest kernel myself, to no avail. I have no problem with recompiling a kernel, though the host cannot afford any downtime. Any ideas on this problem are very welcome. EDIT: I can ping the host from inside the guest.
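    One thing worth ruling out, assuming the guests use virtio NICs: guest kernels from around 2.6.37 onward enable more offloads by default, and TX checksum/segmentation offload behind a bridge is a commonly reported culprit for exactly this pattern, where some packets get through and others silently vanish. A hedged sketch with ethtool, using the interface names from the outputs above:

        # In the guest (virtio NIC is eth0 here): disable TX checksum,
        # TCP segmentation and generic segmentation offload, then retest.
        ethtool -K eth0 tx off tso off gso off

        # The same can be tried on the host-side tap device for that guest:
        ethtool -K vnet3 tx off tso off gso off

    If that helps, the settings can be made persistent with a post-up line in /etc/network/interfaces. Separately, the "mii-tool -F 100baseTx-FD br0" line looks suspect: a bridge device has no PHY to force, so at best it is a no-op; forcing media, if needed at all, belongs on eth0.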

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight and HTML development. I realized that the question can be answered in many ways. Below is a high-level comparison between an HTML/JavaScript client and a Silverlight client, and why Silverlight was chosen over an HTML/JavaScript client (based on the type of users and the major functionality provided):

    1. For end users

    Browser compatibility: Silverlight is a plug-in and requires installation first; however, it provides a consistent look and feel across all browsers. For HTML/DHTML, there is a need to tweak JavaScript for each supported browser; in fact, tags like <span> and <div> work differently on different browsers and versions. So HTML works on most systems, but it also requires a lot of coding effort to adhere to all standards, browsers and versions.

    Out-of-browser support: No support in HTML. Third-party tools like Google Gears offer some functionality, but there are lots of issues around platform and accessibility. Silverlight has out-of-the-box support for running out of the browser and provides features like drag and drop onto the application surface.

    Cut, copy and paste: HTML is displayed in the browser, which in turn provides the facilities for cut, copy and paste. Silverlight (especially 4) provides rich cut-copy-paste features, along with full control over what end users can cut, copy and paste, and advanced features like visual tree printing.

    Rich user experience: HTML can provide some rich experience through JavaScript libraries like jQuery. However, extensive use of JavaScript combined with the various browser versions and their supported JavaScript makes the solution cumbersome. Silverlight is built for the RIA experience.

    User data storage on the client: In HTML only a small amount of data can be stored, and only in cookies. In Silverlight large amounts of data may be stored, and in a secure way. This improves response time.

    Post back: In HTML/JavaScript, post backs can be avoided by use of AJAX, but extensive use of AJAX can be a bottleneck as the browser stack is used for the calls, and both look-and-feel and data travel over the network. In Silverlight everything runs on the client side; calls are made to the server ONLY for data, which also reduces network traffic in the long run.

    2. For Developers

    Coding effort: HTML/JavaScript can take a considerable amount of effort to code if the requirements are rich. For AJAX-like interfaces, knowledge of third-party kits like Dojo, Yahoo UI or jQuery is required, which have a steep learning curve. The ASP.NET coding world revolves mostly around <table> tags for alignment, whereas the most popular tools emit <div> tags, which requires lots of tweaking. AJAX calls can be a performance bottleneck if there are many of them. In Silverlight, coding is in C#, which is managed code; XAML is very intuitive, and Blend can be used for look and feel. Event handling is much cleaner than in JavaScript. Silverlight allows for many clean patterns, like MVVM and composable applications. Every call to the server is asynchronous in Silverlight; AJAX is built in. Threading can be done on the client side itself for better responsiveness; etc.

    Debugging: Debugging in HTML/JavaScript is difficult. As JavaScript is interpreted, there is NO compile-time error handling. Debugging in Silverlight is very helpful: as it is compiled, it provides rich features for both compile-time and run-time error handling.

    Multi-targeting browsers: HTML/JavaScript renders differently in different browsers and their versions, and JavaScript has to be written to paper over the differences in browser behaviour. Silverlight works exactly the same in all the browsers it supports, which includes almost all popular browsers.

    Multi-targeting the desktop: No support in HTML/JavaScript. Silverlight is very close to WPF; both platforms may be easily targeted while maintaining the same source code.

    Rich toolkit: HTML/JavaScript has a limited toolkit of controls. Silverlight provides a rich set of controls including graphs, audio, video, layout, etc.

    3. For Architects

    Design patterns: Silverlight provides for patterns like MVVM (MVC) and a rich (fat) client architecture (a minimal sketch follows below). This makes the "separation of concerns" very clear: the client (Silverlight) does what it is expected to do, and the server does what is expected of it. In the HTML/JavaScript world, most of the processing is done on the server side.

    Extensibility: Silverlight provides a great deal of extensibility, as custom controls may be built. Extensibility is NOT restricted by the browser, only by the plug-in Silverlight runs in. HTML/JavaScript works in a certain way, and extensibility is generally added on the server side rather than the client; the client side is restricted by the limitations of the browser.

    Performance: Silverlight provides localized storage, which may be used for caching data; this reduces response time. As processing can be done on the client side itself, there is no need for server round trips, which decreases turnaround time. The look and feel of the application is downloaded ONLY initially; afterwards ONLY data is fetched from the server.

    Security: Silverlight is compiled code downloaded as a .XAP; compared to HTML/JavaScript, it provides a more secure, sandboxed approach. Cross-domain scripting is inherently prohibited in Silverlight by default. If proper guidelines are followed, Silverlight provides a much more robust security mechanism than the HTML/JavaScript world. For example, finding the server address in obfuscated JavaScript is easier than in a compressed, compiled, obfuscated Silverlight .XAP file.

    Some of these features (like offline and Canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification is expected to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. See http://en.wikipedia.org/wiki/HTML5 for details.

    The above is MY opinion. I will love to hear yours; do let me know via comments. Technorati Tags: Silverlight
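    To make the MVVM point above concrete, here is a minimal, hypothetical C# view model of the kind Silverlight data binding works against (the class and property names are invented for illustration):

        // Minimal MVVM-style view model sketch. Silverlight/WPF bindings
        // listen for PropertyChanged to know when to refresh the UI.
        using System.ComponentModel;

        public class CustomerViewModel : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    RaisePropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void RaisePropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    The XAML side then binds declaratively, e.g. <TextBox Text="{Binding Name, Mode=TwoWay}" />, keeping view logic out of code-behind.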

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .NET client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host."

    By sporadic I mean that it might occur not at all, once every few days, or a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually happen 15-20 seconds (according to the log) after the request. Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes:

        121: The semaphore timeout period has elapsed.
        1236: The network connection was aborted by the local system.

    Some additional environment details:

    Running on an internal network web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines, not sure if that is a surprise anymore or not.

    The web service is a .NET 2.0/3.5 compiled .asmx web service that has its own application pool (.NET 2.0, integrated pipeline) and only has Windows Authentication enabled. We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled; this is used for a portion of our ERP system. We have tried using the same and different application pools, with no effect on the error. This second site isn't hit as often as the primary site and has never had an error.

    As mentioned, the error only happens when called from the .NET client, not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).

    I've checked the OS logs to see if I can see any problems, but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check out the logs of previous errors. I'm probably grasping at straws, but the details:

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 7/8/2009 1:39:50 PM
        Event ID: 5159
        Task Category: Filtering Platform Connection
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: is071019.<**.net
        Description: The Windows Filtering Platform has blocked a bind to a local port.
        Application Information:
            Process ID: 1260
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Source Address: 0.0.0.0
            Source Port: 54802
            Protocol: 17
        Filter Information:
            Filter Run-Time ID: 0
            Layer Name: Resource Assignment
            Layer Run-Time ID: 36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said the DNS and NIC settings all check out, so there is no 'flapping'. I'm not sure they checked any domain controller servers, though, to see if that could be an issue.

    Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate from the networking side of things, although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
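    For what it's worth, the most commonly suggested client-side workaround for this exact exception with .asmx proxies is to stop reusing kept-alive connections by overriding GetWebRequest in the generated proxy class. A hedged sketch; YourService stands in for the actual generated proxy name:

        // Sketch: disable HTTP keep-alive on the .asmx proxy so each call
        // opens a fresh connection instead of reusing one that the server
        // or an intermediate network device may have silently dropped.
        using System;
        using System.Net;

        public partial class YourService : System.Web.Services.Protocols.SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false; // one connection per call
                return request;
            }
        }

    This trades a little throughput for reliability, so whether it is acceptable depends on call volume; it also only masks whatever is dropping the idle connections rather than fixing it.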

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browse a local folder, and CPU sits at 100% frequently. Guest is XP 32-bit. Host is Scientific Linux 6.2, libvirt 0.10. The guest XP OS shows the ACPI Multiprocessor HAL, and virtio drivers for the NIC and SCSI are installed.

    CPUInfo on host:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    The relevant parts of the domain XML:

        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>
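    Two things stand out in that XML, offered here as a hedged suggestion rather than an answer from the thread: the guest is pinned to a single vCPU (cpuset 0) while XP runs the multiprocessor HAL, and there is no PIT/HPET timer configuration, a classic cause of 100% CPU in Windows guests. A sketch of the clock block often recommended for Windows guests on RHEL6-era KVM (apply with virsh edit, then power-cycle the guest; verify element support against your libvirt version's documentation):

        <!-- Hedged sketch: timer settings commonly suggested for Windows guests -->
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>

        <!-- Giving the guest a second vCPU to match the multiprocessor HAL
             is also worth testing (assumption: the host has cores to spare) -->
        <vcpu placement='static'>2</vcpu>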

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
    As I installed Windows 8.1 last week I decided that I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head scratchers to me.

    Modern Split Windows don't resize the Viewport but Zoom Out

    This one falls in the "WTF, really?" department: it looks like Modern Internet Explorer doesn't resize the browser window as every other browser (including IE 11 on the desktop) does, but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and you split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as you would when a browser window is typically resized.

    Here's what I'm talking about in a couple of pictures. First here's the full screen Internet Explorer version (this shot is resized down since it's full screen at 1080p, click to see the full image). This brings up the first issue, which is: on the desktop, who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience and frankly most content that wide just looks weird. Even in typical 10" resolutions of 1280 width it's weird to look at things this way. At least this issue can be worked around with @media queries, either constraining the view or adding additional content to make use of the extra space. Still, running a desktop browser full screen is not optimal on a desktop machine - ever.

    Regardless, this view, while oversized, is what I expect: everything is rendered in the right ratios, with font-size and the responsive design styling properly respected.

    But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screen shot is not resized but cropped - this is actual size content, as you can see in the cropped Twitter window on the right half of the screen). What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect, it looks like the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in some crazy long portrait view.

    There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable as the fonts are too small and break up. As you go into split view and resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), throwing off any responsive layout to the point of unreadability even on a big display, let alone a small tablet screen.

    What could POSSIBLY be the benefit of this screwed up behavior? I checked around a bit, trying different pages in this shrunk-down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which undoubtedly is optimized just for Internet Explorer specifically.

    Bottom Address Bar opaquely overlays Content

    Another problematic feature for me is the browser address bar on the bottom. Modern IE shows the address bar opaquely on the bottom, overlaying the content area of the web page, until you click on the page. Until you do, though, the address bar covers the bottom content solidly - and not just a little bit, but by a good-sized chunk.

    In the application from the screen shot above I have an application toolbar on the bottom, and the IE address bar completely hides that bottom toolbar when the page is first loaded, until the user clicks into the content, at which point the address bar shrinks down to a fat border-style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile optimized applications, so I'd say this is a common use case. But even if you don't have toolbars on the bottom, maybe there's other fixed content on the bottom of the page that is vital to display. While other browsers often also show address bars and then later hide them, those browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE. The address bar overlays content and stays visible until content is clicked. No resize notification or viewport height change is sent to the browser.

    So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective, as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone with its more constrained size doesn't hide content - in fact, the address bar on Windows Phone 8 is always visible.

    What were they thinking?

    Every time I use anything in the Modern/Metro interface in Windows 8/8.1 I get angry. I can pretty much ignore Metro/Modern for my everyday usage, but unfortunately Internet Explorer in the Modern shell I have to live with, because there will be users using it to access my sites. I think it's inexcusable for Microsoft to build such a crappy shell around the browser that impacts the actual usability of Web content. In both of the cases above I can only scratch my head at what could have possibly motivated anybody designing the UI for the browser to make these screwed up choices that manipulate the content in a totally unmaintainable way.

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, HTML5.
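    For completeness, and this is an addition of mine rather than anything from the post: the documented way to make a responsive site behave in Modern IE's snapped/split view is the CSS Device Adaptation @-ms-viewport rule, which asks IE to resize the layout viewport instead of zooming. A minimal sketch, with a hypothetical selector for illustration:

        /* Ask Modern IE to lay out at the real width when snapped,
           instead of scaling a fixed-width layout down. */
        @-ms-viewport {
            width: device-width;
        }

        /* Ordinary responsive rules then apply as intended in split view. */
        @media screen and (max-width: 640px) {
            .main-content {
                font-size: 1em; /* hypothetical rule for illustration */
            }
        }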

    Read the article

  • TV not detected by Windows/VGA - when there is a WHDI device in the signal chain

    - by ashwalk
    I'm at my wit's end with this one...

    I had an EVGA GTS 250, and I used to plug its HDMI out into a WHDI sender, which transmitted to its corresponding WHDI receiver 15ft away, which then connected to a Samsung LN40D LCD TV through another HDMI cable:

        PC/VGA < [hdmi cable] < WHDI sender < [air] < WHDI receiver < [hdmi cable] < TV

    It was perfect and stable, with no perceivable latency. I just plugged everything in the first time and it worked instantly. It sent 5.1 audio, and Windows/nVidia Control Center detected the TV by its name. The WHDI device is this one: http://goo.gl/Q8iWI5

    Now I bought an EVGA GTX 650, and WHDI doesn't work anymore. Both Windows and nVidia Control Center won't detect the TV, only the monitor that's connected via DVI. The TV screen shows "TX202913 connected. Check video signal." on top of a black screen. The WHDI device is not the problem itself; it's just not allowing a direct connection between PC and TV. I would bet that if I put an AVR in its place I'd also have this issue. The HDMI out on this new card works with other monitors, and if I put the older card back, WHDI works normally.

    I have googled this for 5 months on and off. Once I bumped into a page that showed how to force a display device to always-on through a registry edit. Once I restarted Windows, the TV (through WHDI) displayed my expanded or duplicated desktop at 1024x768 ONLY, and listed the display as a "digital display". I could not change the resolution and it wouldn't play back audio (although the option was available in nVidia Control Center's HDMI audio options, it did not work). This proves that there is no conflict between the devices, except that, software-wise, Windows cannot, for the life of it, understand that there's a TV there to send video/audio to. Since this won't do (no audio, poor video), I reverted this registry edit. It's also not an EDID problem within the TV, since the TV works when connected directly.

    The last weird bit of this saga is that today I was reminded of Windows' "Add Device" dialog, gave it a go, and a "Samsung Generic UPNP TV" showed up, which I promptly installed the drivers for, rising to a climax of... ...NOTHING HAPPENING. As far as I can tell, it really didn't change anything other than using up a few KB on my main disk.

    Do any of you have an idea of what may be going on? I can't stand the thought of having a US$200 device not working because of the addition of a newer graphics card, when the much older one had no issues. There is absolutely NO REASON for this to happen. There is NO documentation on WHDI online. Apparently no one buys this stuff. For the same reason, no one responded to this same plea for help on the nVidia and EVGA forums. Worst case, this can be a warning about this setup for people in the future.

    Thanx in advance.

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80  | Next Page >