Search Results

Search found 340 results on 14 pages for 'hood'.


  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see exactly what is going on under the hood of your browser.

    How Much Memory are the Extensions Using?

    Here is our test browser with a new tab and the Extensions Page open, five extensions enabled, and one disabled at the moment. You can access Chrome’s Task Manager using the Page Menu, going to Developer, and selecting Task manager…, or by right-clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the “keyboard ninjas”. Sitting idle as shown above, here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking, you can easily modify the information that is shown by right-clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory.

    Using the about:memory Page to View Memory Usage

    Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds link in the lower left corner of the Task Manager window. Focusing on the four distinct areas, you can see the exact version of Chrome that is currently installed on your system… view the Memory & Virtual Memory statistics for Chrome… (Note: If you have other browsers running at the same time you can view statistics for them here too.) See a list of the processes currently running… and the Memory & Virtual Memory statistics for those processes.

    The Difference with the Extensions Disabled

    Just for fun we decided to disable all of the extensions in our test browser… The Task Manager window is looking rather empty now, but the memory consumption has definitely seen an improvement.

    Comparing Memory Usage for Two Extensions with Similar Functions

    For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you want to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial (see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly, both were nearly identical in the amount of memory being used.

    Purging Memory

    Perhaps you like the idea of being able to “purge” some of that excess memory consumption. With a simple command switch modification to Chrome’s shortcut(s) you can add a Purge Memory button to the Task Manager window as shown below. Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption.

    Conclusion

    We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources, every little bit definitely helps.
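    For reference, the command switch on Chrome builds of that era was --purge-memory-button, appended to the shortcut target roughly like this (the path below is a placeholder, not from the article):

        "C:\Path\To\chrome.exe" --purge-memory-button

    After restarting Chrome from that shortcut, the Purge memory button appears in the Task Manager window.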

    Read the article

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours, and after checking MVA I decided to query the available course material over at Pluralsight. Wow, thanks to fantastic corporations and acquisitions there are lots of courses available, nicely split by SharePoint version as well as by particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks.

    Today's resource(s)

    Of course, I'm "all in" for the latest developer resources:

    - SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience
    - SharePoint 2013 Developer Ramp-Up - Part 2
    - SharePoint 2013 Developer Ramp-Up - Part 3
    - SharePoint 2013 Developer Ramp-Up - Part 4
    - SharePoint 2013 Developer Ramp-Up - Part 5
    - SharePoint 2013 Developer Ramp-Up - Part 6

    I guess I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too; that's mainly for personal reference instead of bookmarking in the browser, as I'll use my own blog for that purpose:

    - Atkinson's SharePoint Blog
    - Düsseldorfer Jung Doerflers SharePoint Blog
    - SharePoint Community
    - Absolute SharePoint

    The links are in no preferential order; I added them as soon as I found them. Most probably, I'm going to report about specific articles from those resources during this challenge. So stay tuned, and I'll try to provide more details on certain topics.

    Takeaway

    First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately, and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been roughly three years: 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase, especially when it's about historical expressions like 'WSS'* like I had it yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it.

    * WSS stands for Windows SharePoint Services 3.0, which forms the 'core engine' of SharePoint 2007.

    Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I guess I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material:

    - SharePoint 2007 (aka SharePoint v3 aka SharePoint 12): Windows SharePoint Services (WSS) 3.0; Microsoft Office SharePoint Server (MOSS) 2007; .NET Framework 3.0, 32-bit or 64-bit OS
    - SharePoint 2010 (aka SharePoint v4 aka SharePoint 14): Microsoft SharePoint Foundation (SPF) 2010; Microsoft SharePoint Server (SPS) 2010; .NET Framework 3.5 SP1, 64-bit OS only
    - SharePoint 2013: Microsoft SharePoint Foundation (SPF) 2013; Microsoft SharePoint Server (SPS) 2013; .NET Framework 4.5, 64-bit OS only

    After this quick excursion it gets more interesting. SharePoint 2013 has a number of Development Practices and Techniques under the hood, and choosing the correct path to go will be quite a decision process depending on the task requirements.
    At the moment, the following two options seem to be my future fields of operation:

    - Client-Side Object Model (CSOM)
    - REST API and OData syntax

    As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will be using CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has had quite a momentum for a while now, and it would be a shame to ignore that kind of opportunity and its possibilities.
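    Just to get a feel for the shape of CSOM code, a minimal C# sketch (the site URL is a placeholder of mine, not from the course material):

        using System;
        using Microsoft.SharePoint.Client;

        class Demo
        {
            static void Main()
            {
                // connect to a site and read its title via the client object model
                var context = new ClientContext("http://intranet.example.local");
                Web web = context.Web;
                context.Load(web, w => w.Title);   // declare what to fetch
                context.ExecuteQuery();            // one round-trip to the server
                Console.WriteLine(web.Title);
            }
        }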

    Read the article

  • what distinguishes a computer scientist/software engineer from regular people who learn programming languages and APIs?

    - by Amumu
    In university, we learn and reinvent the wheel a lot in order to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply. For example, memory management as in C is just an abstraction over manually managed memory contents and addresses. The problem is, when we go to work, productivity is usually what is required. I could program my own containers, or string class, or date/time (using POSIX with C system calls) to do the job, but that would take much longer than using the existing STL or Boost libraries, which abstract all of those things and are very easy to use. This leads to an issue: a regular person who learns only one programming language and its language-related APIs doesn't need to get through all the low-level/under-the-hood stuff. These people may eventually compete with the mainstream graduates from computer science or software engineering and call themselves programmers. At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (though not at the electronic level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works? You may fall behind the "amateurs" if you spend too much time learning how things work inside. It is totally valid for those people to call themselves programmers. This confuses me. So, after all, should programming be considered a universal skill? Do the programming language and concepts matter, or do the problems we solve matter? For example, in many C/C++ vs. Java (and other high-level language) debates, one of the main arguments is that C/C++ offers performance as well as access to low-level facilities. One of the main reasons (in my opinion) is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone, just my observation and my experience as well; try to google "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Based on Java's rationale, if programming languages keep evolving, one day everyone may be able to map their logic directly with natural language. Everyone can program. On that day, maybe the real programmers will be mathematicians, who can perform the most complex logic (including business logic and academic logic) without worrying about installing/configuring compilers and IDEs. What's our job as computer scientists/software engineers? To solve computer-specific problems, or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . The problems require only basic knowledge of the programming language, but focus more on problem solving with the language. In sum, what distinguishes a computer scientist/software engineer from regular people who learn programming languages and APIs? A mathematician can be considered a programmer if he is good enough to use a programming language to implement his formulas. Can we programmers do this? Probably not, for most of us, since we specialize in computers, not math. An electronics engineer who learns how to use C to program his devices can be considered a programmer.
    If programming languages keep being simplified, might the software engineers who implement business logic and create software one day be obsolete? (Not computer scientists though, since many of the CS topics are scientific, and science won't change, but technology will.)

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
      In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database; then look for the biggest files in all subfolders, and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order.

          SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC

      You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:

      - TABLE_ID: the name of the object (it can also be a dictionary or an index)
      - COLUMN_ID: the name of the column the object belongs to (you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes)
      - RECORDS_COUNT: the number of rows in the column
      - USED_SIZE: the memory used for the object

      By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

      - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
      - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. the temperature), round the number to the nearest decimal that is relevant to you.
      - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
      - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to the higher density columns (those with high cardinality) is the technique that provides the best compression ratio.

      After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
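      If you prefer to pull those numbers into code and compute the bytes-per-row ratio directly, a minimal C# sketch using ADOMD.NET could look like this (the connection string is a placeholder of mine, not from the article):

          using System;
          using Microsoft.AnalysisServices.AdomdClient;

          class ColumnSizes
          {
              static void Main()
              {
                  using (var conn = new AdomdConnection("Data Source=localhost;Catalog=MyTabularDb"))
                  {
                      conn.Open();
                      var cmd = conn.CreateCommand();
                      cmd.CommandText =
                          "SELECT TABLE_ID, COLUMN_ID, RECORDS_COUNT, USED_SIZE " +
                          "FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS " +
                          "ORDER BY USED_SIZE DESC";
                      using (var reader = cmd.ExecuteReader())
                      {
                          while (reader.Read())
                          {
                              long rows = Convert.ToInt64(reader["RECORDS_COUNT"]);
                              long size = Convert.ToInt64(reader["USED_SIZE"]);
                              // bytes per row: the ratio the article suggests watching
                              double perRow = rows == 0 ? 0 : (double)size / rows;
                              Console.WriteLine("{0} {1}: {2} bytes ({3:F2} per row)",
                                  reader["TABLE_ID"], reader["COLUMN_ID"], size, perRow);
                          }
                      }
                  }
              }
          }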

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset, and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components.

        // ********* Interfaces **********
        interface IFooSource
        {
            // this is the event-stream of objects of type Foo
            IObservable<Foo> FooArrivals { get; }
        }

        interface IBarSource
        {
            // this is the event-stream of objects of type Bar
            IObservable<Bar> BarArrivals { get; }
        }

        // ********* Implementations *********
        class FooSource : IFooSource
        {
            // Here we put logic that receives Foo objects from the network
            // and publishes them to the FooArrivals event stream.
        }

        class FooSubsetsToBarConverter : IBarSource
        {
            IFooSource fooSource;

            IObservable<Bar> BarArrivals
            {
                get
                {
                    // Do some fancy Rx operators on fooSource.FooArrivals, like Buffer,
                    // Window, Join and others, and return IObservable<Bar>
                }
            }
        }

        // this class will subscribe to the bar source and do processing
        class BarsProcessor
        {
            BarsProcessor(IBarSource barSource);
            void Subscribe();
        }

        // ******************* Main ************************
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = FooSourceFactory.Create();
                // this will create FooSubsetToBarConverter and BarsProcessor
                var barsProcessor = BarsProcessorFactory.Create(fooSource);
                barsProcessor.Subscribe();
                // this enters a loop of listening for Foo objects from the network
                // and notifying about their arrival.
                fooSource.Run();
            }
        }

    The other suggested another design whose main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed.

        // ********* Interfaces *********
        interface IPublisher<T>
        {
            void Subscribe(ISubscriber<T> subscriber);
        }

        interface ISubscriber<T>
        {
            Action<T> Callback { get; }
        }

        // ********* Implementations *********
        class FooSource : IPublisher<Foo>
        {
            public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }

            // here we put logic that receives Foo objects from some source
            // (the network?) and publishes them to the registered subscribers
        }

        class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
        {
            void Callback(Foo foo)
            {
                // here we put logic that aggregates Foo objects and publishes Bars
                // when we have received a subset of Foos that match our criteria;
                // maybe we use Rx here internally.
            }

            public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
        }

        class BarsProcessor : ISubscriber<Bar>
        {
            void Callback(Bar bar)
            {
                // here we put code that processes Bar objects
            }
        }

        // ********* Program *********
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = fooSourceFactory.Create();
                // this will create BarsProcessor and perform all the necessary subscriptions
                var barsProcessor = barsProcessorFactory.Create(fooSource);
                // this enters a loop of listening for Foo objects from the network
                // and notifying about their arrival.
                fooSource.Run();
            }
        }

    Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
    Here are some things to consider about the designs:

    - In the first design, the consumer of our interfaces has the whole power of Rx at his/her fingertips and can apply any Rx operators. One of us claims this is an advantage; the other claims it is a drawback.
    - The second design allows us to use any publisher/subscriber architecture under the hood, while the first design ties us to Rx.
    - If we wish to use the power of Rx, the second design requires more work, because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.

    (A sketch of how the first design's converter might fill in its Rx body follows below.)
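    A minimal sketch, assuming Bars can be built from short batches of related Foos; the one-second buffering rule and the Bar constructor taking a batch are my assumptions, not the poster's:

        using System;
        using System.Reactive.Linq;

        class FooSubsetsToBarConverter : IBarSource
        {
            private readonly IFooSource fooSource;

            public FooSubsetsToBarConverter(IFooSource fooSource)
            {
                this.fooSource = fooSource;
            }

            public IObservable<Bar> BarArrivals
            {
                get
                {
                    // batch Foos that arrive within one second,
                    // then build one Bar per non-empty batch
                    return fooSource.FooArrivals
                        .Buffer(TimeSpan.FromSeconds(1))
                        .Where(batch => batch.Count > 0)
                        .Select(batch => new Bar(batch));
                }
            }
        }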

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, unless key violations etc. prevent me from doing so. The first step is simply to get the list of all headers. There seem to be two ways:

    A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008, then right-click the result and click "Copy With Headers". However, I get zero rows back, and when I try to copy nothing + headers, I get:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        Value cannot be null.
        Parameter name: data (System.Windows.Forms)
        ------------------------------
        BUTTONS: OK
        ------------------------------

    This looks like a bug ... anyhow ... there is another way.

    B) I can run sp_help table_of_interest;. However, I end up getting too much back: seven different result sets, of which I am only interested in the second one. The columns of the second result set are: Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation. I might be interested in just Column_name and Type, but maybe other columns too. So ... since sp_help probably runs a bunch of queries ... how do I get under the hood? How can I run the second query AND filter down the number of columns that I am interested in? Many thanks!
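    For what it's worth, one way under the hood is to skip sp_help entirely and query the catalog directly; INFORMATION_SCHEMA.COLUMNS already exposes the name and type columns. A minimal C# sketch (connection string and table name are placeholders):

        using System;
        using System.Data.SqlClient;

        class ListColumns
        {
            static void Main()
            {
                string sql =
                    "SELECT COLUMN_NAME, DATA_TYPE " +
                    "FROM INFORMATION_SCHEMA.COLUMNS " +
                    "WHERE TABLE_NAME = @table " +
                    "ORDER BY ORDINAL_POSITION";

                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@table", "table_of_interest");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0} ({1})", reader["COLUMN_NAME"], reader["DATA_TYPE"]);
                }
            }
        }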

    Read the article

  • Using SharePoint label to display document version in Word 2007 doesn't work when moved to another l

    - by ITManagerWhoCodes
    I am surfacing the Document Library version of a Word 2007 document by creating a Label ({version}) within the content type of the Document Library and adding it as a Quick-part Label in the Word 2007 document. This works great: the latest version always shows up when I open the Word document. I also added this Version quick-part field to the footer of the Word document and then added this document as a document template to my content type, "ContentTypeMain". Now, I can go to my Document Library and create a new instance of "ContentTypeMain" with the Version field automatically there. This works great as well. However, if I create another Document Library and add the same Content Type, "ContentTypeMain", to it, the value of the Version quick-part doesn't update or refresh. The only way is to add another copy of the Label quick-part. It seems like the Quick-Part Label that maps to the Document Library Version is unique to the Document Library. My application dynamically creates subsites using site definitions and list templates; thus the document libraries in each of the subsites are all created from the same List Template. I inspected the XML files under the hood of the Word document, and it does look like there is a GUID attached to the Quick-Part Version field.

    Read the article

  • iPhone Development - CLLocationManager vs. MapKit

    - by Mustafa
    If I want to show userLocation on the map and at the same time record the user's location, is it a good idea to add an observer to userLocation.location and record the locations, OR should I still use CLLocationManager for recording the user's location and use mapView.showsUserLocation to show the user's current location (blue indicator)? I want to show the default blue indicator supported by the MapKit API. Also, here's a rough code sample:

        - (void)viewDidLoad {
            ...
            locationManager = [[CLLocationManager alloc] init];
            locationManager.desiredAccuracy = kCLLocationAccuracyBest;
            locationManager.distanceFilter = DISTANCE_FILTER_VALUE;
            locationManager.delegate = self;
            [locationManager startUpdatingLocation];

            myMapView.showsUserLocation = YES;
            [myMapView addObserver:self forKeyPath:@"userLocation.location" options:0 context:nil];
            ...
        }

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
            // Record the location information
            // ...
        }

        - (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
            NSLog(@"%s begins.", __FUNCTION__);

            // Make sure that the location returned has the desired accuracy
            if (newLocation.horizontalAccuracy <= manager.desiredAccuracy) return;

            // Record the location information
            // ...
        }

    Under the hood, I think MKMapView also uses CLLocationManager to get the user's current location. So, will this create any problems, since both CLLocationManager and MKMapView will try to use the same location services? Will there be any conflicts, or a lack of accurate/current data?

    Read the article

  • Async.Parallel or Array.Parallel.Map ?

    - by gurteen2
    Hello- I'm trying to implement a pattern I read from Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx) which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work. I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example):

        let getStockData cusip =
            let D = DataProvider()
            let arr = D.GetPriceSeries(cusip)
            arr

        let data = Array.Parallel.map (fun x -> getStockData x) stockCusips

    So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns me an array of arrays (one per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood, and it actually could be faster using asynchronous I/O. So to test this out, I have attempted to write this function using asyncs, and I think the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!":

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                let! arr = D.GetData(cusip)
                return arr
            }

    The error I get is:

        This expression was expected to have type Async<'a> but here has type obj

    It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread. So the first question really is: what's wrong with my syntax above in getStockDataAsync? And at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.

    Read the article

  • Odd Infragistics UltraComboEditor data binding non-bug

    - by Richard Dunlap
    Within an Infragistics 8.2 UltraComboEditor, we had the following properties set via C#:

        DataSource = dataSource;
        ValueMember = "Measure";
        DisplayMember = "Name";
        DataBindings.Add("Value", repository, "Measure");
        DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged;

    where dataSource was an array of objects, each with a property Measure, and repository was an object with a property Measure. (Those strings are actually constructor parameters -- just using explicit strings to simplify the example.) In the course of some refactoring, the name of the property on the objects in the array was changed to BaseEnum (the objects are actually wrapped enumerations, for the curious), but the name of ValueMember above was not changed. And yet, the combo box binding continued to work through initial testing, beta testing, and even after release... until two customers emailed in noting that the combo box was no longer changing the underlying parameter. We were able to dig out the problem by careful study of the source code repository... despite being in the awkward position of not being able to replicate the buggy behavior internally. Two-part question:

    1. What's happening under the hood that allowed the binding to continue to function, and/or what might be unique about those two users that caused the binding to (correctly) fail? (O/S version isn't alone the answer, and we get the unexpectedly functioning binding on machines that have never had a version of the software before, so we're not looking at rogue binaries.)
    2. Are there tools that might have been able to warn us about the misbind, even if something was cleaning up behind?

    Read the article

  • In C# should I reuse a function / property parameter to compute temp result or create a temporary v

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate a point. Imagine that there is a lot more work than trimming going on.

        public string Thingy
        {
            set
            {
                // I guess we can throw a null reference exception here on null.
                value = value.Trim(); // Well, imagine that there is so much processing to do
                this.thingy = value;  // that this.thingy = value.Trim() would not fit on one line ...
            }
        }

    So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter, or create a temporary variable. I am not a big fan of temporary variables. On the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it. One concern I have is: if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places, this ought to mess things up ... but who would knowingly make such a change if it already worked without a ref? It seems like it is their responsibility in this case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, then which do you prefer and why? Thanks.
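    For comparison, the temporary-variable version of the same setter might look like this (a minimal sketch; the explicit null check is an assumption the original comment only hints at):

        public string Thingy
        {
            set
            {
                if (value == null)
                    throw new ArgumentNullException("value");

                // a local leaves the 'value' parameter untouched, at the cost of one more name in scope
                string cleaned = value.Trim();
                // ... imagine the lengthy processing happening on 'cleaned' here ...
                this.thingy = cleaned;
            }
        }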

    Read the article

  • Using cellUpdateEvent with YUI DataTable and JSON DataSource

    - by Rob Hruska
    I'm working with a UI that has a (YUI2) JSON DataSource that's being used to populate a DataTable. What I would like to do is, when a value in the table gets updated, perform a simple animation on the cell whose value changed. Here are some relevant snippets of code:

        var columns = [
            {key: 'foo'},
            {key: 'bar'},
            {key: 'baz'}
        ];

        var dataSource = new YAHOO.util.DataSource('/someUrl');
        dataSource.responseType = YAHOO.util.DataSource.TYPE_JSON;
        dataSource.connXhrMode = 'queueRequests';
        dataSource.responseSchema = {
            resultsList: 'results',
            fields: [
                {key: 'foo'},
                {key: 'bar'},
                {key: 'baz'}
            ]
        };

        var dataTable = new YAHOO.widget.DataTable('container', columns, dataSource);

        var callback = {
            success: dataTable.onDataReturnReplaceRows,
            failure: function() {
                // error handling code
            },
            scope: dataTable
        };

        dataSource.setInterval(1000, null, callback);

    And here's what I'd like to do with it:

        dataTable.subscribe('cellUpdateEvent', function(record, column, oldData) {
            var td = dataTable.getTdEl({record: record, column: column});
            YAHOO.util.Dom.setStyle(td, 'backgroundColor', '#ffff00');
            var animation = new YAHOO.util.ColorAnim(td, {
                backgroundColor: { to: '#ffffff' }
            });
            animation.animate();
        });

    However, it doesn't seem like using cellUpdateEvent works. Does a cell that's updated as a result of the setInterval callback even fire a cellUpdateEvent? It may be that I don't fully understand what's going on under the hood with DataTable. Perhaps the whole table is being redrawn every time the data is queried, so it doesn't know or care about changes to individual cells? Is the solution to write my own specific function to replace onDataReturnReplaceRows? Could someone enlighten me on how I might go about accomplishing this?

    Edit: After digging through datatable-debug.js, it looks like onDataReturnReplaceRows won't fire the cellUpdateEvent. It calls reset() on the RecordSet that's backing the DataTable, which deletes all of the rows; it then re-populates the table with fresh data. I tried changing it to use onDataReturnUpdateRows, but that doesn't seem to work either.

    Read the article

  • Looped jQuery slideshow with smooth cross-fades

    - by artlung
    I'm trying to do a simple rotating image on the home page. Under the hood I'm reading a directory and then populating URLs for the images into an array. What I want to do is cross-fade the images. If it were just a matter of showing the next one, it would be easy, but since I need to cross-fade, it's a bit harder. I think what I want to do is do the fades by calling animate() on the opacity value of the <img> tag, and in between swapping out the CSS background-image property of the enclosing <div>. But the results are not that great. I've used tools for more full-featured slideshows, but I don't want the overhead of adding a plugin if I can avoid it, and a simple cross-fade seems like it should be easier. Here's my JavaScript (I'm using jQuery 1.3.2):

        var slideshow_images = ["http:\/\/example.com\/wordpress\/wp-content\/themes\/testtheme\/sidebar-home-bg\/bg1.jpg","http:\/\/example.com\/wordpress\/wp-content\/themes\/testtheme\/sidebar-home-bg\/bg2.jpg","http:\/\/example.com\/wordpress\/wp-content\/themes\/testtheme\/sidebar-home-bg\/bg3.jpg"];
        var slideshow_index = 0;
        var delay = 4000;

        var swapSlides = function() {
            var slideshow_count = slideshow_images.length;
            // initialize the background to be the current image
            $('#home-slideshow').css({
                'background-image': 'url(' + slideshow_images[slideshow_index] + ')',
                'background-repeat': 'no-repeat',
                'width': 200,
                'overflow': 'hidden'
            });
            slideshow_index = ((slideshow_index + 1) == slideshow_count) ? 0 : slideshow_index + 1;
            // fade out the img; now, the background is visible
            $('#home-slideshow img').animate({opacity: 0}, delay);
            // next change the url on the img
            $('#home-slideshow img').attr('src', slideshow_images[slideshow_index]);
            // and fade it up
            $('#home-slideshow img').animate({opacity: 1.0}, delay);
            // do it again
            setTimeout(swapSlides, 4000);
        }

        jQuery(document).ready(function(){
            if (swapSlides) {
                swapSlides();
            }
        });

    And here's the markup I'm using:

        <div id="home-slideshow"><img src="http://example.com/wordpress/wp-content/themes/testtheme/sidebar-home-bg/bg1.jpg" alt="" /></div>

    Read the article

  • Positioning SVG Elements

    - by Rob Wilkerson
    In the course of toying with SVG for the first time (using the Raphael library), I've run into a problem positioning dynamic elements on the canvas in such a way that they're completely contained within the canvas. What I'm trying to do is randomly position n words/short phrases. Since the text is variable, its position needs to be variable as well, so what I'm doing is:

    1. Initially creating the text at point 0,0 with no opacity.
    2. Checking the width of the drawn text element using text.getBBox().width.
    3. Setting a new x coordinate as Math.random() * (canvas_width - ( text_width/2 ) - pad).
    4. Altering the x coordinate of the text to the newly set value (text.attr( 'x', x )).
    5. Setting the opacity attribute of the text to 1.

    I'll be the first to admit that my math acumen is limited, but this seems pretty straightforward. Somehow, I still end up with text running off beyond the right edge of my canvas. For simplicity above, I removed the bit that also sets a minimum x value by adding it to the Math.random() result. It is there, though, and I see the same problem on the leading edge of the canvas. My understanding (such as it is) is that the Math.random() bits would generate a number between 0 and 1, which could then be multiplied by some number (in my case, the canvas width - half of the text width - some arbitrary padding) to get the outer bound. I'm dividing the width of the text in half because its position on the grid is set at its center. I hope I've just been staring at this for too long, but is my math that rusty, or am I misunderstanding something about the behavior of Math.random(), SVG, text, or anything else that's under the hood of this solution?

    Read the article

  • crash when linking swc with Alchemy

    - by paleozogt
    I have a project I'm trying to compile with Alchemy. It will compile .o and .a files, but when trying to create a .swc, it will fail. It appears to crash with this error:

        g++ -swc -o mylib.swc my-flex-interface.cpp mylib.a
        Cannot yet select: 0x279c810: ch,flag = AVM2ISD::CALL - A call instruction 0x279c7a0, 0x29c4350
        0   llc                0x00636dfe _ZNSt8_Rb_treeIN4llvm3sys4PathES2_St9_IdentityIS2_ESt4lessIS2_ESaIS2_EE13insert_uniqueERKS2_ + 6078
        1   llc                0x006373a2 _ZNSt8_Rb_treeIN4llvm3sys4PathES2_St9_IdentityIS2_ESt4lessIS2_ESaIS2_EE13insert_uniqueERKS2_ + 7522
        2   libSystem.B.dylib  0x9530942b _sigtramp + 43
        3   ???                0xffffffff 0x0 + 4294967295
        4   libSystem.B.dylib  0x953968e5 raise + 26
        5   libSystem.B.dylib  0x953ac99c abort + 93
        6   llc                0x002f4fe0 _ZN98_GLOBAL__N__Volumes_data_dev_FlaCC_llvm_2.1_lib_Target_AVM2_AVM2ISelDAGToDAG.cpp_00000000_F04616B616AVM2DAGToDAGISel6Emit_7ERKN4llvm9SDOperandEj + 0
        7   llc                0x002f8e1b _ZN98_GLOBAL__N__Volumes_data_dev_FlaCC_llvm_2.1_lib_Target_AVM2_AVM2ISelDAGToDAG.cpp_00000000_F04616B616AVM2DAGToDAGISel10SelectCodeEN4llvm9SDOperandE + 2219
        8   llc                0x002fa193 _ZN98_GLOBAL__N__Volumes_data_dev_FlaCC_llvm_2.1_lib_Target_AVM2_AVM2ISelDAGToDAG.cpp_00000000_F04616B616AVM2DAGToDAGISel10SelectRootEN4llvm9SDOperandE + 819
        9   llc                0x002e6a2c _ZN4llvm19X86_64TargetMachineD0Ev + 65116
        10  llc                0x003de4ca _ZN4llvm11StoreSDNodeD1Ev + 1610
        11  llc                0x0040d3fe _ZN4llvm11StoreSDNodeD1Ev + 193918
        12  llc                0x0040f92e _ZN4llvm11StoreSDNodeD1Ev + 203438
        13  llc                0x005d1926 _ZN4llvm12FunctionPassD1Ev + 20998
        14  llc                0x005d1f3a _ZN4llvm12FunctionPassD1Ev + 22554
        15  llc                0x005d20c5 _ZN4llvm12FunctionPassD1Ev + 22949
        16  llc                0x00002e44 0x0 + 11844
        17  llc                0x00001f36 0x0 + 7990
        18  ???                0x00000006 0x0 + 6
        make[2]: *** [src/app/alchemy/sonic.swc] Error 6
        make[1]: *** [src/app/alchemy/CMakeFiles/alchemy.dir/all] Error 2
        make: *** [all] Error 2

    I'm not familiar enough with LLVM (which Alchemy uses under the hood) to figure out what this error means. Any ideas?

    Read the article

  • What type of webapp is the sweet spot for Scala's Lift framework?

    - by ajay
    What kind of applications are the sweet spot for Scala's Lift web framework? My requirements:

    - Ease of development and maintainability.
    - Ready for production purposes, i.e. a good, active online community, regular patches and updates for security and performance fixes, etc.
    - The framework should survive a few years. I don't want to write an app in a framework for which no updates/patches are available after 1 year.
    - Has good UI templating engines.
    - Interoperation with Java. (Scala satisfies this already; just mentioning it here for completeness' sake.)
    - Good component-oriented development.
    - Time required to develop should be proportional to the complexity of the web application.
    - Should not be totally configuration based. I hate it when code gets automatically generated for me and does all sorts of magic under the hood. That is a debugging nightmare.
    - The amount of Lift knowledge required to develop a webapp should be proportional to the complexity of the web application, i.e. I shouldn't have to spend 10+ hours learning Lift just to develop a simple TODO application. (I have knowledge of databases and Scala.)

    Does Lift satisfy these requirements?

    Read the article

  • Does this use of Monitor.Wait/Pulse have a race condition?

    - by jw
    I have a simple producer/consumer scenario, where there is only ever a single item being produced/consumed. Also, the producer waits for the worker thread to finish before continuing. I realize that kind of obviates the whole point of multithreading, but please just assume it really needs to be this way (: This code doesn't compile, but I hope you get the idea:

        // m_data is initially null

        // This could be called by any number of producer threads simultaneously
        void SetData(object foo)
        {
            lock (x) // Line A
            {
                assert(m_data == null);
                m_data = foo;
                Monitor.Pulse(x); // Line B
                while (m_data != null)
                    Monitor.Wait(x); // Line C
            }
        }

        // This is only ever called by a single worker thread
        void UseData()
        {
            lock (x) // Line D
            {
                while (m_data == null)
                    Monitor.Wait(x); // Line E
                // here, do something with m_data
                m_data = null;
                Monitor.Pulse(x); // Line F
            }
        }

    Here is the situation that I am not sure about: Suppose many threads call SetData() with different inputs. Only one of them will get inside the lock, and the rest will be blocked on Line A. Suppose the one that got inside the lock sets m_data and makes its way to Line C. Question: Could the Wait() on Line C allow another thread at Line A to obtain the lock and overwrite m_data before the worker thread even gets to it? Supposing that doesn't happen, and the worker thread processes the original m_data and eventually makes its way to Line F, what happens when that Pulse() goes off? Will only the thread waiting on Line C be able to get the lock? Or will it be competing with all the other threads waiting on Line A as well? Essentially, I want to know if Pulse()/Wait() communicate with each other specially "under the hood" or if they are on the same level as lock(). The solution to these problems, if they exist, is obvious of course: just surround SetData() with another lock, say, lock(y). I'm just curious if it's even an issue to begin with.
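    For reference, the textbook way to close that kind of producer-side race is to have producers wait for the slot to be free instead of asserting it, and to use PulseAll so every waiter re-checks its own condition. A minimal sketch (mine, not the poster's code):

        void SetData(object foo)
        {
            lock (x)
            {
                while (m_data != null)   // wait until the slot is free instead of asserting
                    Monitor.Wait(x);
                m_data = foo;
                Monitor.PulseAll(x);     // wake the worker (and any other waiting producers)
                while (m_data != null)   // wait for the worker to consume the item
                    Monitor.Wait(x);
            }
        }

    UseData() stays as it is, except that the Pulse() on Line F also becomes PulseAll(); since each thread re-tests its condition in a while loop, waking "too many" threads is harmless.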

    Read the article

  • How does Java handle ArrayList references and assignments?

    - by Jonathan
    Hey all- I mostly write in C but am using Java for this project. I want to know what Java is doing here under the hood.

        ArrayList<Integer> prevRow, currRow;
        currRow = new ArrayList<Integer>();
        for (int i = 0; i < numRows; i++) {
            prevRow = currRow;
            currRow.clear();
            currRow.addAll(aBunchOfItems);
        }

    Is the prevRow = currRow line copying the list, or does prevRow now point to the same list as currRow? If prevRow points to the same list as currRow, I should create a new ArrayList instead of clearing....

        private ArrayList<Integer> someFunction(ArrayList<Integer> l) {
            Collections.sort(l);
            return l;
        }

        main() {
            ArrayList<Integer> list = new ArrayList<Integer>(Arrays.asList(3, 2, 1));
            list = someFunction(list); // Option 1
            someFunction(list);        // Option 2
        }

    In a similar vein, do Option 1 and Option 2 do the same thing in the above code? Thanks- Jonathan

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, After much time and effort, we're finally using Maven to manage our application lifecycle for development. We still, unfortunately, use Ant to build an EAR before deploying to Test / QA / Staging. The thing is, while we made that leap forward, developers are still free to do as they please for testing their code. One issue that we have is that half our team is using Tomcat to test on and the other half is using Jetty. I prefer Jetty slightly over Tomcat, but regardless, we use WAS for all the other environments. My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments. Tomcat, Jetty, and WAS are different under the hood. My opinion is that we all should develop on what we're deploying to production with, so we don't have the problem of "well, it worked fine on my machine." While I prefer Jetty, I'd just as soon we all work in the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team, and development has been a free-for-all since then. Walter

    Read the article

  • How to implement Session timeout in Web Server Side?

    - by Morgan Cheng
    I saw a web framework implementing in-memory sessions this way: the session object is added to the Cache with a timeout, and when the time is out, the session is removed from the Cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the Cache to refresh the timeout. Everything looks fine, until this scenario is discovered. Say one operation takes a long time, longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally, the long-running request finishes and releases the lock. But since it already took longer than the timeout, the session object has already been removed from the Cache. This is inevitable, because the only request holding the lock never gets a chance to "touch" the session object in the cache. The second request gets the lock but cannot retrieve the expired session object. Oops... To fix this issue, the second request has to re-create the session object. But this is like digging a buried body out of its tomb and trying to bring it back to life; it leads to buggy code. I'm wondering what the best way is to implement session timeouts that handle such a scenario. I know that current platforms must have good session mechanisms; I just want to know how they work under the hood.
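    One approach (a sketch of mine, not any particular platform's actual implementation) is to stop treating the cache timeout as the source of truth and instead store the expiry on the session itself, checked lazily on access; the lock holder then refreshes the expiry after its work, however long that work took:

        using System;
        using System.Collections.Generic;

        class Session
        {
            public readonly object SyncRoot = new object();
            public DateTime Expires;
            public Dictionary<string, object> Data = new Dictionary<string, object>();
        }

        class SessionStore
        {
            private readonly Dictionary<string, Session> sessions = new Dictionary<string, Session>();
            private readonly TimeSpan timeout = TimeSpan.FromMinutes(20);

            public void WithSession(string id, Action<Session> work)
            {
                Session s;
                lock (sessions)
                {
                    // an expired session is replaced lazily here, never "dug up" later
                    if (!sessions.TryGetValue(id, out s) || s.Expires < DateTime.UtcNow)
                        sessions[id] = s = new Session();
                    // pin the session so it cannot expire while requests are queued on it
                    s.Expires = DateTime.MaxValue;
                }
                lock (s.SyncRoot)
                {
                    try { work(s); }
                    finally
                    {
                        // refresh the timeout *after* the work, however long it took
                        lock (sessions) { s.Expires = DateTime.UtcNow + timeout; }
                    }
                }
            }
        }

    A background sweep can still reclaim memory by removing entries whose Expires has passed (under the sessions lock); because in-flight sessions are pinned to DateTime.MaxValue, the sweeper can never remove a session a request is waiting on.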

    Read the article

  • How to conduct a good programming interview?

    - by luckyluke
    I do interviews from time to time, to recruit some not-bad people, and I really think I am NOT doing the job correctly. I work at a company where we have to do a lot of DB programming, .NET programming, and Java programming, so we need people who are open-minded and not focused on one particular technology. After all, a language is a notation; you have to understand what is going on under the hood. I ask people about their projects, ask them some coding questions (believe me, a SQL question involving a CROSS JOIN is hard), let them write some code, ask them about OO design, ask them how they update their knowledge and stay up to date, and whether they have FUN when they code (at least sometimes). Hell, I even give them a coding exercise to do at home (3 hours max) to see how they think and code. And yet my hit rate at hiring junior members (those who survive the initial 3 months) is just about 33%. So my question is: how do YOU conduct good interviews, because I think my hit rate is too low? Do you have any best practices (the hit rate should be at least 60-70%)? p.s. And I have noticed that the best programmers are lazy but motivated; just being lazy is not enough :) And people who write the best code are attentive to details :)

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to the benefit of select query performance. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me, because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
           ,col2 int
           ,col3 int
           ,col4 int
        PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it offers no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indexes that need to be updated, but I don't get any indication of the relative cost of these other indexes. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indexes without having to actually try it?

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module, please? Many thanks, Bill, billpg.com (A little background to my question, for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
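    For reference, the shape being described (one accept loop handing each request off to a worker thread) looks roughly like this minimal C# sketch; the prefix and routing logic are illustrative assumptions, and whether Mono's HttpListener behaves identically under the hood is exactly the open question:

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class Server
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/"); // illustrative prefix
                listener.Start();
                while (true)
                {
                    // blocks until the next request arrives
                    HttpListenerContext context = listener.GetContext();
                    // hand the request off so the accept loop keeps running
                    new Thread(() => Handle(context)).Start();
                }
            }

            static void Handle(HttpListenerContext context)
            {
                // here you would look up which "conversation" this request belongs to
                byte[] body = Encoding.UTF8.GetBytes("hello");
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }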

    Read the article

  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before, and it's a surprisingly ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (newlines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format ... or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with:

        filter = string.maketrans(
            string.translate(string.maketrans('', ''),
                string.maketrans('', ''), string.printable[:-5]),
            '.' * len(string.translate(string.maketrans('', ''),
                string.maketrans('', ''), string.printable[:-5])))

    ... which is an unreadable mess (though it does work). From there you can use something like:

        for each_line in sometext:
            print string.translate(each_line, filter)

    ... and be happy. (So long as you don't look under the hood.) Now it is more readable if I break that horrid expression into separate statements:

        ascii = string.maketrans('', '')  # The whole ASCII character set
        nonprintable = string.translate(ascii, ascii, string.printable[:-5])  # Optional delchars argument
        filter = string.maketrans(nonprintable, '.' * len(nonprintable))

    And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!

    Read the article
