Search Results

Search found 18811 results on 753 pages for 'dynamic memory allocation'.


  • Another memory leak question

    - by James
    Hello. My application keeps running for 4 to 6 hours. During this time there is no continuous increase in memory or anything similar. Then, after 4 to 6 hours, I start getting EOutOfMemory exceptions. Even at that point only 900MB out of 3GB RAM is in use according to Task Manager, and the application itself is not using more than 200MB. So why am I getting the EOutOfMemory error? Does it mean that a memory leak is not necessarily visible in Task Manager? Regards

    Read the article

  • Windows Server task manager displays much higher memory use than sum of all processes' working sets

    - by Sleepless
    I have a 16 GB Windows Server 2008 x64 machine mostly running SQL Server 2008. The free memory as seen in Task Manager is very low (128 MB at the moment), i.e. about 15.7 GB are used. So far, so good. Now when I try to narrow down the process(es) using the most memory I get confused: none of the processes has more than 200MB Working Set Size as displayed in the 'Processes' tab of Task Manager. Well, maybe the Working Set Size isn't the relevant counter? To figure that out I used a PowerShell command [1] to sum up each individual property of the process object in a sort of brute-force approach - surely one of them must add up to the 15.7 GB, right? Turns out none of them does, with the closest being VirtualMemorySize (around 12.7 GB) and PeakVirtualMemorySize (around 14.7 GB). WTF? To put it another way: which of the numerous memory-related process counters is the "correct" one, i.e. counts towards the server's physical memory as displayed in the Task Manager's 'Performance' tab? Thank you all! [1] $erroractionpreference="silentlycontinue"; get-process | gm | where-object {$_.MemberType -eq "Property"} | foreach-object {$_.Name; (get-process | measure-object -sum $_.Name).sum / 1MB}
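
    A plausible explanation, offered as an addition to the original question: SQL Server configured with "locked pages in memory" allocates its buffer pool through AWE, and AWE allocations never appear in any process working set, so Task Manager's per-process counters cannot account for them. Assuming SQL Server 2008, a hedged check against the server's own DMV would look like this:

        -- How much physical memory SQL Server really holds (SQL Server 2008+)
        SELECT physical_memory_in_use_kb / 1024 AS physical_mb,
               locked_page_allocations_kb / 1024 AS locked_mb
        FROM sys.dm_os_process_memory;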

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations, like queries, DataTables and DataRows are still surfaced to the business layer. Here's an example: a business object method that runs a dynamic query, where the code ends up looping over the result set using the ugly DataRow array syntax:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
            if (result < 0)
                return result;
            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                string title = row["title"] as string;
                string safeTitle = row["safeTitle"] as string;
                int pk = (int)row["pk"];
                string newSafeTitle = this.GetSafeTitle(title);
                if (newSafeTitle != safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", pk));
                    result++;
                }
            }
            return result;
        }

    The problem with looping over DataRow objects is twofold: the array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this:

        public int UpdateAllSafeTitles()
        {
            int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
            if (result < 0)
                return result;
            result = 0;
            foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
            {
                dynamic entry = new DynamicDataRow(row);
                string newSafeTitle = this.GetSafeTitle(entry.title);
                if (newSafeTitle != entry.safeTitle)
                {
                    this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                        this.CreateParameter("@safeTitle", newSafeTitle),
                        this.CreateParameter("@pk", entry.pk));
                    result++;
                }
            }
            return result;
        }

    The code looks a bit more natural and describes what's happening a little nicer as well. And using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class.

    Creating your own custom Dynamic Objects

    .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios), but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform custom interception of member accesses, and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.
    In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like:

        /// <summary>
        /// This class provides an easy way to turn a DataRow
        /// into a dynamic object that supports direct property
        /// access to the DataRow fields.
        ///
        /// The class also automatically fixes up DbNull values
        /// (DbNull becomes null coming out of the DataRow,
        /// null becomes DbNull going in)
        /// </summary>
        public class DynamicDataRow : DynamicObject
        {
            /// <summary>
            /// Instance of the DataRow passed in
            /// </summary>
            DataRow DataRow;

            /// <summary>
            /// Pass in a DataRow to work off
            /// </summary>
            /// <param name="dataRow"></param>
            public DynamicDataRow(DataRow dataRow)
            {
                DataRow = dataRow;
            }

            /// <summary>
            /// Returns a value from the DataRow items array.
            /// If the field doesn't exist null is returned.
            /// DbNull values are turned into .NET nulls.
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="result"></param>
            /// <returns></returns>
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                try
                {
                    result = DataRow[binder.Name];
                    if (result == DBNull.Value)
                        result = null;
                    return true;
                }
                catch { }
                result = null;
                return false;
            }

            /// <summary>
            /// Property setter implementation stores the value
            /// into the underlying DataRow field
            /// </summary>
            /// <param name="binder"></param>
            /// <param name="value"></param>
            /// <returns></returns>
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                try
                {
                    if (value == null)
                        value = DBNull.Value;
                    DataRow[binder.Name] = value;
                    return true;
                }
                catch { }
                return false;
            }
        }

    To demonstrate the basic features here's a short test:

        [TestMethod]
        [ExpectedException(typeof(RuntimeBinderException))]
        public void BasicDataRowTests()
        {
            DataTable table = new DataTable("table");
            table.Columns.Add(new DataColumn() { ColumnName = "Name", DataType = typeof(string) });
            table.Columns.Add(new DataColumn() { ColumnName = "Entered", DataType = typeof(DateTime) });
            table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) });

            DataRow row = table.NewRow();
            DateTime now = DateTime.Now;
            row["Name"] = "Rick";
            row["Entered"] = now;
            row["NullValue"] = null; // converted to DbNull

            dynamic drow = new DynamicDataRow(row);
            string name = drow.Name;
            DateTime entered = drow.Entered;
            string nulled = drow.NullValue;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(entered, now);
            Assert.IsNull(nulled);

            // this should throw a RuntimeBinderException
            Assert.AreEqual(entered, drow.enteredd);
        }

    The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'data source' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.
    By implementing these two simple methods we get to provide the two features I was complaining about at the beginning that are missing from the DataRow:
    Direct property syntax
    Automatic type casting, so no explicit casts are required

    Caveats

    As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing, where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocation - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly, so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation.

    The bottom line for performance with dynamic code is: make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (ie. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work.

    Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime, so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:

        // this should throw a RuntimeBinderException
        Assert.AreEqual(entered, drow.enteredd);

    Dynamic - Lots of uses

    The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand, there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here, you can also implement 'method missing' behavior on objects with TryInvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: it's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, .NET
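
    As a footnote to the 'method missing' remark above: here is a minimal, hypothetical sketch (mine, not from the post) of a DynamicObject that dispatches unknown method calls through TryInvokeMember, with the methods registered at runtime:

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        // A dynamic method bag: methods are registered at runtime and
        // dispatched through TryInvokeMember ('method missing' style).
        public class DynamicMethods : DynamicObject
        {
            readonly Dictionary<string, Func<object[], object>> methods =
                new Dictionary<string, Func<object[], object>>();

            public void Register(string name, Func<object[], object> method)
            {
                methods[name] = method;
            }

            public override bool TryInvokeMember(InvokeMemberBinder binder,
                object[] args, out object result)
            {
                result = null;
                Func<object[], object> method;
                if (!methods.TryGetValue(binder.Name, out method))
                    return false; // unknown method -> RuntimeBinderException
                result = method(args);
                return true;
            }
        }

        // Usage:
        // var bag = new DynamicMethods();
        // bag.Register("Add", a => (int)a[0] + (int)a[1]);
        // dynamic d = bag;
        // int sum = d.Add(1, 2);  // 3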

    Read the article

  • Dynamic mod_rewrite or how to plan a dynamic website

    - by Sophia Gavish
    Hi, I'm trying to make clean URLs for a blog on a dynamic website, but I think the real problem is that I don't know how to plan the website schema. I read about how to use mod_rewrite, and all I found is how to turn "http://www.website.com/?category&date&post-title" into "http://www.website.com/category/date/post-title". That works OK for me. The problem is that if my URL looks like "http://www.website.com/blog/?id=34", this method won't work, as far as I understand it. So, I have two questions: 1. Is there a way to use mod_rewrite (maybe reading from a txt file) to read the post title of my blog and rewrite my URL by date and post-title? 2. Should I rewrite my website to query the data from one index file on the homepage and use mod_rewrite to produce the nice URL? Should I also query the date and the title of the post instead of just the post ID?
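
    For illustration only (not from the original question): mod_rewrite can only transform URL text; it cannot look up a post title for an ID on its own (RewriteMap can consult a text file, but only from the server config, not from .htaccess). So the usual design is the second option: route every pretty URL to one index script and let it resolve the slug. A minimal .htaccess sketch, assuming a hypothetical index.php that accepts date and slug parameters:

        RewriteEngine On
        # /2010/05/my-post-title  ->  index.php?y=2010&m=05&slug=my-post-title
        RewriteRule ^([0-9]{4})/([0-9]{2})/([a-z0-9-]+)/?$ index.php?y=$1&m=$2&slug=$3 [L,QSA]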

    Read the article

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free.

        server ~ # free
                     total       used       free     shared    buffers     cached
        Mem:      16176648   16025476     151172          0     285432     950300
        -/+ buffers/cache:   14789744    1386904
        Swap:            0          0          0

        server ~ # virsh start zimbra
        error: Failed to start domain zimbra
        error: Unable to read from monitor: Connection reset by peer

        server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/2
        Failed to allocate 3221225472 B: Cannot allocate memory
        2012-07-06 08:42:56.076+0000: shutting down

        server ~ # uname -a
        Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to occur since the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail; they do it at nearly the same time. The last time a duplicity backup job was running, but this was not always the case. Any suggestions on how to debug this? Best regards, elm
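
    A hedged debugging direction (not from the original post): "Cannot allocate memory" for a 3GB guest while free reports mostly cached memory often points at page-cache pressure or physical memory fragmentation rather than a real shortage. Assuming a stock Ubuntu 12.04 kernel, these checks are a reasonable first step:

        # Few high-order free blocks in this output = fragmented physical memory
        cat /proc/buddyinfo

        # Did the OOM killer take the qemu-kvm processes down?
        dmesg | grep -i oom

        # Flush the page cache, ask the kernel to compact memory, then retry
        sync && echo 3 > /proc/sys/vm/drop_caches
        echo 1 > /proc/sys/vm/compact_memory
        virsh start zimbra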

    Read the article

  • java memory allocation under linux

    - by pstanton
    I'm running 4 java processes with the following command: java -Xmx256m -jar ... and the system has 8Gb memory under Fedora 12. However, it is apparently going into swap. How can that be if 4 x 256m = 1Gb? EDIT: also, how can all 8Gb of memory be used with so little memory allocated to basically the only thing running? Is it Java not garbage collecting because the OS tells it it doesn't need to, or what? TOP:

        top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67
        Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie
        Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem: 8252304k total, 8195552k used, 56752k free, 34356k buffers
        Swap: 10354680k total, 74044k used, 10280636k free, 6624148k cached

          PID USER     PR NI  VIRT   RES  SHR S %CPU %MEM    TIME+ COMMAND
         1948 xxxxxxxx 20  0 1624m  240m 4020 S 96.8  3.0 164:33.75 java
         1927 xxxxxxxx 20  0  139m   31m  27m R 91.8  0.4  38:34.55 postgres
         1929 xxxxxxxx 20  0 1624m  200m 3984 S 86.2  2.5 183:24.88 java
         1969 xxxxxxxx 20  0 1624m  292m 3984 S 65.6  3.6 154:06.76 java
         1987 xxxxxxxx 20  0  137m   29m  27m R 28.5  0.4  75:49.82 postgres
         1581 root     20  0  159m   18m 4712 S 22.5  0.2  52:42.54 Xorg
         2411 xxxxxxxx 20  0  309m  9748 4544 S 20.9  0.1  45:05.08 gnome-system-mo
         1947 xxxxxxxx 20  0  137m   28m  27m S 13.3  0.4  44:46.04 postgres
         1772 xxxxxxxx 20  0  135m   25m  25m S  4.0  0.3   1:09.14 postgres
         1966 xxxxxxxx 20  0  137m   29m  27m S  3.0  0.4  64:27.09 postgres
         1773 xxxxxxxx 20  0  135m   732  624 S  1.0  0.0   0:24.86 postgres
         2464 xxxxxxxx 20  0 15028  1156  744 R  0.7  0.0   0:49.14 top
          344 root     15 -5     0     0    0 S  0.3  0.0   0:02.26 kdmflush
            1 root     20  0  4124   620  524 S  0.0  0.0   0:00.88 init
            2 root     15 -5     0     0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root     RT -5     0     0    0 S  0.0  0.0   0:00.00 migration/0
            4 root     15 -5     0     0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
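
    A hedged reading of the numbers (not part of the original question): -Xmx caps only the Java heap; each JVM additionally uses permgen, thread stacks, JIT code caches and other native allocations, which is why top shows about 1.6GB VIRT per process. Also, most of the 8Gb "used" here is the kernel page cache (6.6GB cached), which is reclaimable and not really consumed. Assuming a standard Linux userland, the JVMs' true resident footprint can be summed like this:

        # Sum the resident set size (RSS) of all java processes, in MB
        ps -C java -o rss= | awk '{ sum += $1 } END { print sum / 1024 " MB" }'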

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts
    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements
    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:
    Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash;
    20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold;
    33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
    Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data;
    Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today
    Oracle Exadata X3-2 Database In-Memory Machine systems are available in Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configurations to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers.
    Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes
    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources
    Oracle Press Release
    Oracle Exadata Database Machine
    Oracle Exadata X3-2 Database In-Memory Machine
    Oracle Exadata X3-8 Database In-Memory Machine
    Oracle Database 11g
    Follow Oracle Database via Blog, Facebook and Twitter
    Oracle OpenWorld 2012
    Oracle OpenWorld 2012 Keynotes
    Like Oracle OpenWorld on Facebook
    Follow Oracle OpenWorld on Twitter
    Oracle OpenWorld Blog
    Oracle OpenWorld on LinkedIn
    Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Trying to find video of a talk on the impact of memory access latency

    - by user12889
    Some months ago I stumbled across a video on the internet of somebody giving a very good talk on the impact of memory access latency on the execution of programs. I'm trying to find the video again; maybe you know which video I mean and where I can find it. This is what I remember about the talk/video: I don't remember the title, and it may have been broader, but the talk was largely about the impact of memory access latency in modern processors on program execution. The talk was in English and most likely the location was in America. The speaker was very knowledgeable about the topic, but the talk was in an informal setting (not a conference presentation or university lecture). I think the speaker was known to the audience and may even have been famous (I don't remember). The audience may have been a computer club / group of a local community or company (but I don't remember for sure).

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #001

    - by pinaldave
    I am introducing a new series today. This series is called "Memory Lane." Over the last six years and 2,300 articles, there are fantastic articles I keep revisiting. Sometimes when I read old blog posts I think I should have included something or added a bit more to the topic. But for many articles, I still feel they are fantastic (even after six years) and could be read again and again. I have also found that after six years of blogging, readers will write to me and say "Pinal, why don't you write about X, Y or Z." The answer is: I already did! It is here on the blog, or in the comments, or possibly in one of my books. The solution has always been there; it is simply a matter of finding it and presenting it again. That is why I have created Memory Lane. I will be listing the best articles from the same week of the past six years. You will find plenty of reading material every Saturday from articles of SQLAuthority past.

    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2006
    Query to Display Foreign Key Relationships and Name of the Constraint for Each Table in Database
    My blogging journey began with this blog post. As many of you know, my journey began with creating a repository of my scripts. This was the very first script I had written, to find out foreign key relationships and constraints. The same query was updated later on using the new SYS schema modification in SQL Server. Version 1: Using sys.schema. Version 2: Using sys.schema and additional columns.

    2007
    Milestone Posts – 1 Year (365 blogs) and 1 Million Views
    When I reached the 1st week of Nov in 2007, the SQLAuthority.com blog had around 365 blog posts and 1 million views. I was not obsessed with the statistics before, but this was indeed an interesting moment for me, as I was blogging for myself and did not realize that so many people were reading my blog. In the year 2006 there were not many bloggers, so blogging was new to me as well. I was learning as I went.

    2008
    Stored Procedure WITH ENCRYPTION and Execution Plan
    If you have a stored procedure and its code is encrypted, what will be displayed in the execution plan when you execute it? There are two kinds of execution plans: 1) estimated and 2) actual. It is indeed interesting to know what is displayed in both cases when the stored procedure is encrypted. What is your guess? Now go ahead and click on here and figure out your answer. If the user is not able to log in to SQL Server due to any error or issue, two different blog posts address the same issue here and here.

    2009
    It seems like Nov is the SQLPASS month. In 2009, during the same week, I was in the USA attending the SQLPASS event. I had a fantastic experience attending the event. Here are the blog posts covering the subject: Day 1, Day 2, Day 3, Day 4.

    2010
    Finding the last backup time for all the databases
    This little script is very powerful and instantly gives details of when the last backup of each database was performed. If you are reading this blog post – I say just go ahead and check if everything is alright on your server and you have all the necessary latest backups. It is better to be safe than sorry.
    Version 1: The above script was improved to get more details about the database.
    Version 2: This version of the script includes pretty much all the backup-related information in a single script. Do not forget to save it for future use.
    Are you a Database Administrator or a Database Developer?
    Three years ago I created a very small survey, and the results I received are very interesting. The question asked about the profile of the visitors of that blog post, and I noticed that DBAs and Developers were balanced, with a little inclination towards Developers. Have you voted so far? If not, go ahead!

    2011
    New Book Released – SQL Server Interview Questions And Answers
    One year ago, on November 3, 2011, I published my book SQL Server Interview Questions and Answers. The book has a lot of great reviews, and we have even received emails telling us this book was a life changer because it helped get them a great new job. I don't think anyone can get a job just from my book. It was the individual who studied hard and took it seriously, and was determined to learn something new. The book might have helped guide them and show them the topics to study, but they spent their own energy on it. It was their own skills that helped them pass the exam. So, in this very first installment, I would like to thank the readers for accepting our book, for giving it great reviews and for using it and sharing it. Our goal in writing this book was to help others, and it seems like we succeeded.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
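
    The 2010 item above links to the full last-backup script; purely as an illustration of the idea (my sketch, not Pinal's version), the backup history lives in msdb and can be summarised like this:

        -- Last completed full backup per database (type 'D' = full database backup)
        SELECT d.name AS database_name,
               MAX(b.backup_finish_date) AS last_backup
        FROM sys.databases d
        LEFT JOIN msdb.dbo.backupset b
            ON b.database_name = d.name AND b.type = 'D'
        GROUP BY d.name
        ORDER BY last_backup;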

    Read the article

  • Optimizing Memory Usage in a .NET Application with ANTS Memory Profiler

    Most people have encountered an OutOfMemory problem at some point or other, and these people know that tracking down the source of the problem is often a time-consuming and frustrating task. Florian Standhartinger gives us a walkthrough of how he used the ANTS Memory Profiler to help make an otherwise painful task that little bit less troublesome.

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a machine where instructions and data have separate cache memories ("Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" in terms of time, not amount of data - the data cache might be used "more" in terms of amount of data, while the instruction cache might be used "more often", especially depending on the program. Are there different answers a) in general and b) for a specific program?
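
    One empirical way to answer b) for a specific program (an illustration, not from the original question): hardware performance counters track each cache separately. On Linux, assuming a CPU that exposes the generic cache events (exact event names vary by processor), perf can compare the two directly:

        # Compare instruction-cache vs data-cache activity for one program
        perf stat -e L1-icache-load-misses,L1-dcache-loads,L1-dcache-load-misses ./myprog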

    Read the article

  • Increase the size of a memory mapped file

    - by sandun dhammika
    I am maintaining a memory-mapped file to store my tree-like data structure. When I update the data structure, I run into this problem: the file is limited in size and can't be too large or too small. I have methods like

        void mapfile_insert_record(RECORD* record);
        void mapfile_modify_record(RECORD* record);

    Both operations can run out of the free space in the mapped file. How do I overcome this? What strategy should I use?
    1. Calculate whether the operation requires growing the file, as a pre-condition in both methods.
    2. Grow it dynamically - for example, manage a timer, constantly poll the file for its free available size, and automatically extend it.
    Any ideas or patterns to overcome this problem?
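
    For illustration (my sketch, not from the question): on POSIX systems the usual pattern is the pre-condition check of option 1 - when an insert would overflow, grow the backing file with ftruncate() and extend the mapping, e.g. with Linux's mremap() (elsewhere: munmap() and mmap() again). The function name below is hypothetical:

        #define _GNU_SOURCE
        #include <sys/mman.h>
        #include <unistd.h>

        /* Grow the backing file and its mapping to new_size bytes.
           Returns the (possibly moved) mapping base, or MAP_FAILED. */
        void *mapfile_grow(int fd, void *base, size_t old_size, size_t new_size)
        {
            if (ftruncate(fd, (off_t)new_size) != 0)  /* extend the file first */
                return MAP_FAILED;
            /* Linux-specific: remap in place if possible, else let it move */
            return mremap(base, old_size, new_size, MREMAP_MAYMOVE);
        }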

    Read the article

  • ANTS Memory Profiler 7.0

    - by Sam Abraham
    In the next few lines, I would like to briefly review ANTS Memory Profiler 7.0. I was honored to be extended the opportunity to review this valuable tool as part of the GeeksWithBlogs Influencers Program, a quarterly award providing its recipients access to valuable tools and enabling them with an opportunity to provide a brief write-up reviewing the complimentary tools they receive.

    Typical Usage
    ANTS Memory Profiler 7.0 is very intuitive and easy to use for any user, be it novice or expert. A simple yet comprehensive menu screen enables the selection of the appropriate program type to profile, as well as the executable or site for this program. A typical use case starts with establishing a baseline memory snapshot, which tells us the initial memory cost of the program under normal or low activity conditions. We would then take a second snapshot after the program has performed an activity which we want to investigate for memory leaks. We can then compare the initial baseline snapshot against the snapshot taken when the program has completed processing the activity in question, to study anomalies in memory that did not get freed up after the program completed its function. The following are some screenshots outlining the selection of the program to profile (an executable for this demonstration's purposes).
    Figure 1 - Getting Started
    Figure 2 - Selecting an Application to Profile

    Features and Options
    Right after the second snapshot is generated, Memory Profiler gives us immediate access to information on memory fragmentation, size differences between snapshots, unmanaged memory allocation, and statistics on the largest classes taking up un-freed memory space. We also have the option to itemize objects held in memory, grouped by object type, within which we can study the instances allocated of each type. Filtering options enable us to quickly narrow down the object instances we are interested in.
    Figure 3 - Easily accessible Execution Memory Information
    Figure 4 - Class List
    Figure 5 - Instance List
    Figure 6 - Retention Graph for a Particular Instance

    Conclusion
    I greatly enjoyed the opportunity to evaluate ANTS Memory Profiler 7.0. The tool's intuitive user interface design and easily accessible menu options enabled me to quickly identify problem areas where memory was left unfreed in my code.

    Tutorials and References
    Find out more about ANTS Memory Profiler 7.0: http://www.red-gate.com/supportcenter/Product?p=ANTS Memory Profiler
    Check out what other reviewers of this valuable tool have already shared:
    http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/10/ants-memory-profiler-7.0.aspx
    http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx

    Read the article

  • Memory Management/Embedded Management in C

    - by Sauron
    I'm wondering if there are a few good books/tutorials/etc. that go into memory management/allocation specifically (or at least have a good dedicated section on it) when it comes to C. This is mostly for me learning embedded development and trying to keep size down. I've read and learned C fine, including the "standard" learning books. However, most of the books don't spend a huge amount of time (understandably, since C is pretty huge in general) going into the finer details about what's going on under the hood. I saw a few on Amazon: http://www.amazon.com/C-Pointers-Dynamic-Memory-Management/dp/0471561525 http://www.amazon.com/Understanding-Pointers-C-Yashavant-Kanetkar/dp/8176563587/ref=pd_sim_b_1 (not sure how relevant this would be). A specific book for embedded that covers this would be nice, but code samples or... heck, tutorials or anything about this topic would be helpful!
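
    As a taste of the kind of technique such books cover - and as my own illustrative sketch, not an excerpt from any of them - here is a minimal fixed-size block pool, the classic embedded alternative to malloc(): static storage, deterministic timing, no fragmentation.

        #include <stddef.h>

        #define POOL_BLOCKS 32
        #define BLOCK_SIZE  64   /* bytes per block, tuned per application */

        static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
        static void *free_list;
        static int initialized;

        static void pool_init(void)
        {
            int i;
            /* Thread a free list through the blocks themselves. */
            for (i = 0; i < POOL_BLOCKS; i++) {
                *(void **)pool[i] = free_list;
                free_list = pool[i];
            }
            initialized = 1;
        }

        void *pool_alloc(void)
        {
            void *block;
            if (!initialized)
                pool_init();
            block = free_list;
            if (block)
                free_list = *(void **)block;
            return block;  /* NULL when the pool is exhausted */
        }

        void pool_free(void *block)
        {
            *(void **)block = free_list;
            free_list = block;
        }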

    Read the article

  • Web sites leaking memory? IIS 7.5, Windows Server 2008 R2

    - by Charles
    I have several web sites on my Windows 2008 server that have been working flawlessly for over a year. Just a few days ago I ran into an issue where my server stopped serving up pages on some of these sites for no apparent reason. I dug into it a little more today and I see that some of my sites (they're all ASP.NET MVC 3.0 sites) are consuming over 460MB of memory. Like I said, this just started the other day after a very long period of no issues at all. I have two questions: 1) Is there a way to throttle how much memory is consumed by the w3wp process before I force it to restart (restart the app pool for a particular site), so that it doesn't keep hogging all of the memory? 2) Any ideas what could have caused this to start happening?
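
    On question 1, for illustration (not from the original post): IIS 7.5 application pools support memory-based recycling, which restarts w3wp.exe once its private memory crosses a threshold. A hedged example, with a hypothetical pool name and a 400MB limit (the setting is in KB):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MySitePool" /recycling.periodicRestart.privateMemory:409600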

    Read the article

  • Any useful suggestions to figure out where memory is being freed in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour: during a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB (note: a registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows). After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second. I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being freed. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (in other words, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger? Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this would cause the application to crash.
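
    One non-invasive angle, offered as an illustration rather than a definitive answer: a tiny external watchdog can poll the target's counters through GetProcessMemoryInfo and timestamp the decline, which at least narrows down when the frees happen without recompiling or pausing the application:

        #include <windows.h>
        #include <psapi.h>   // link with psapi.lib
        #include <cstdio>

        // Poll another process's working set once a second (pid assumed known).
        void WatchMemory(DWORD pid)
        {
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                   FALSE, pid);
            if (!h) return;
            PROCESS_MEMORY_COUNTERS pmc;
            for (;;) {
                if (GetProcessMemoryInfo(h, &pmc, sizeof(pmc)))
                    std::printf("%lu ms: working set %lu KB\n",
                                GetTickCount(),
                                (unsigned long)(pmc.WorkingSetSize / 1024));
                Sleep(1000);
            }
        }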

    Read the article

  • Java memory usage increases when app is used, but doesn't decrease when not being used

    - by gren
    I have a Java application that uses a lot of memory when used, but the memory usage doesn't go down when the program is idle. Is there a way to force Java to release this memory, since it is not needed at that time? I can understand reserving a small amount of memory, but Java reserves all the memory it ever uses. It also reuses this memory later, but there must be a way to force Java to release it when it's not needed. System.gc() is not working.
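
    For context, a hedged note (not from the original question): HotSpot only returns heap pages to the OS when its free-ratio settings allow the heap to shrink after a collection, and the behaviour varies by collector. Flags along these lines make the heap contract more aggressively:

        java -Xms64m -Xmx512m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar app.jar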

    Read the article

  • Bad allocation exceptions in C++

    - by me1982
    Hello, in a school project of mine I was requested to create a program without using the STL. In the program I use a lot of:

        Something* pointer = new Something;
        if (pointer == NULL)
            throw AllocationError();

    My questions are about allocation errors:
    1. Is there an automatic exception thrown by new when allocation fails?
    2. If so, how can I catch it if I'm not using the STL (#include "exception.h")?
    3. Is the NULL testing enough?
    Thank you. I'm using Eclipse CDT (C++) with MinGW on Windows 7.
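
    For illustration (my sketch, not part of the question): by default operator new never returns NULL - it throws std::bad_alloc, which is declared in the standard <new> header (part of the core language support library, not the STL containers). The NULL test is only valid with the nothrow form:

        #include <new>  // std::bad_alloc, std::nothrow

        void demo()
        {
            // Default new: throws std::bad_alloc on failure, never returns NULL
            try {
                int *a = new int[1000];
                delete[] a;
            } catch (const std::bad_alloc &) {
                // handle allocation failure
            }

            // nothrow form: returns NULL on failure, so the NULL check works
            int *b = new (std::nothrow) int[1000];
            if (b == NULL) {
                // handle allocation failure
            }
            delete[] b;
        }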

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language, and while that's a fundamental feature of the language, there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C#; Creating a dynamic, extensible C# Expando Object; Creating a dynamic DataReader for dynamic Property Access. Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types.

    TypeCasting Generics

    Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case.

    A Generic Example

    Imagine I've implemented a generic type like this:

        public class RazorEngine<TBaseTemplateType>
            where TBaseTemplateType : RazorTemplateBase, new()

    You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version which is implemented like this:

        public class RazorEngine : RazorEngine<RazorTemplateBase>
        {
            public RazorEngine() : base() { }
        }

    To instantiate one:

        var engine = new RazorEngine<MyCustomRazorTemplate>();

    Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template:

        var template = new TBaseTemplateType() { Engine = this }

    The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface:

        public interface IRazorEngine<TBaseTemplateType>

    it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter):

        public class RazorTemplateBase : MarshalByRefObject, IDisposable
        {
            public object Engine { get; set; }
        }

    Now, because the Engine property is a non-typed object, when I need to do something with this value I still have no way to cast it explicitly. What I really would need is:

        public RazorEngine<> Engine { get; set; }

    but that's not possible.
    Dynamic to the Rescue

    Luckily, with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:

        /// <summary>
        /// Allows rendering a dynamic template from a string template,
        /// passing in a model. This is like rendering a partial,
        /// but providing the input as a string.
        /// </summary>
        public virtual string RenderTemplate(string template, object model)
        {
            if (template == null)
                return string.Empty;

            // if there's no template markup
            if (!template.Contains("@"))
                return template;

            // use dynamic to get around generic type casting
            dynamic engine = Engine;
            string result = engine.RenderTemplate(template, model);
            if (result == null)
                throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage);
            return result;
        }

    Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible, since it's a single dynamic operation on an otherwise very complex operation call.

    Dynamic as a Bailout

    This sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing, fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead, dynamic provides a nice and simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to.

    All that said, it's a good idea to think about the use of dynamic before you rush in. In many situations there are alternatives that still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamic here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation, where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism?
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp

    Read the article

  • Memory leak when returning object

    - by Yakattak
    I have this memory leak that has been very stubborn for the past week or so. I have a class method in a class called "ArchiveManager" that will unarchive a specific .dat file for me and return an array with its contents. Here is the method:

        +(NSMutableArray *)unarchiveCustomObject
        {
            NSMutableArray *array = [NSMutableArray arrayWithArray:[NSKeyedUnarchiver unarchiveObjectWithFile:/* Archive Path */]];
            return array;
        }

    I understand I don't have ownership of it at this point, and I return it.

        CustomObject *myObject = [[ArchiveManager unarchiveCustomObject] objectAtIndex:0];

    Then, later, I unarchive it in a view controller to be used (I don't even create an array of it, nor do I make a pointer to it; I just reference it to get something out of the array returned by unarchiveCustomObject via objectAtIndex). This is where Instruments reports a memory leak, yet I don't see how this can leak! Any ideas? Thanks in advance.

    Edit: CustomObject initWithCoder added:

        -(id)initWithCoder:(NSCoder *)aDecoder
        {
            if (self = [super init])
            {
                self.string1 = [aDecoder decodeObjectForKey:kString1];
                self.string2 = [aDecoder decodeObjectForKey:kString2];
                self.string3 = [aDecoder decodeObjectForKey:kString3];
                UIImage *picture = [[UIImage alloc] initWithData:[aDecoder decodeObjectForKey:kPicture]];
                self.picture = picture;
                self.array = [aDecoder decodeObjectForKey:kArray];
                [picture release];
            }
            return self;
        }

    Read the article

  • Convert Dynamic to Type and convert Type to Dynamic

    - by Jon Canning
    using System;
    using System.Collections.Generic;
    using System.Dynamic;
    using System.Linq;
    using System.Linq.Expressions;

    public static class DynamicExtensions
    {
        public static T FromDynamic<T>(this IDictionary<string, object> dictionary)
        {
            var bindings = new List<MemberBinding>();
            foreach (var sourceProperty in typeof(T).GetProperties().Where(x => x.CanWrite))
            {
                var key = dictionary.Keys.SingleOrDefault(x => x.Equals(sourceProperty.Name, StringComparison.OrdinalIgnoreCase));
                if (string.IsNullOrEmpty(key)) continue;
                var propertyValue = dictionary[key];
                bindings.Add(Expression.Bind(sourceProperty, Expression.Constant(propertyValue)));
            }
            Expression memberInit = Expression.MemberInit(Expression.New(typeof(T)), bindings);
            return Expression.Lambda<Func<T>>(memberInit).Compile().Invoke();
        }

        public static dynamic ToDynamic<T>(this T obj)
        {
            IDictionary<string, object> expando = new ExpandoObject();
            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var propertyExpression = Expression.Property(Expression.Constant(obj), propertyInfo);
                // Convert to object so the lambda compiles for properties of any type
                var currentValue = Expression.Lambda<Func<object>>(
                    Expression.Convert(propertyExpression, typeof(object))).Compile().Invoke();
                expando.Add(propertyInfo.Name.ToLower(), currentValue);
            }
            return expando as ExpandoObject;
        }
    }
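
    A quick usage sketch (mine, with a hypothetical Person type) showing the round trip:

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        // Dictionary -> typed object (keys are matched case-insensitively)
        var dict = new Dictionary<string, object> { { "name", "Ada" }, { "age", 36 } };
        Person p = dict.FromDynamic<Person>();

        // Typed object -> ExpandoObject with lower-cased member names
        dynamic d = p.ToDynamic();
        Console.WriteLine(d.name);  // "Ada"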

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness - and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you - phone, web, chat and social, to name a few - but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let's take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: the screen where agents do their job. Agents don't want to be slowed down by busy screens, scrolling through endless tabs or links to find what they're looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.

    Agent Scripting: sometimes agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: this is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What's more, by publishing guides in answers on support pages, customers can resolve issues themselves without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you control the design of workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you - so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it's important to understand why and how your customers contact you in the first place. Once you have this list of "whys" and "hows", you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

    Five Top Tips to take away:
    1. Start by working out "why" and "how" customers are contacting you.
    2. Implement a clean and relevant agent desktop to support your agents.
    3. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    4. Enhance your knowledge base with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    5. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.

    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you'll learn not only how to leverage the Agent Desktop tools, but also how to optimise your self-service pages to enhance your customers' web experience.

    Useful resources: Review the Best Practice Guide; review the tune-up guide.

    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    RightNow Customer Service Administration (3 days)
    RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    RightNow Analytics (2 days)
    RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • Get Object from memory using memory address

    - by Hamza Karmouda
    I want to know how to get an object back from memory - in my case, a MediaRecorder. Here are my classes.

    MyMic class:

        public class MyMic {
            MediaRecorder recorder2;
            File file;
            private Context c;

            public MyMic(Context context) {
                this.c = context;
            }

            private void stopRecord() throws IOException {
                recorder2.stop();
                recorder2.reset();
                recorder2.release();
            }

            private void startRecord() {
                recorder2 = new MediaRecorder();
                recorder2.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder2.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder2.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder2.setOutputFile(file.getPath());
                try {
                    recorder2.prepare();
                    recorder2.start();
                } catch (IllegalStateException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    My receiver class:

        public class MyReceiver extends BroadcastReceiver {
            private Context c;
            private MyMic myMic;

            @Override
            public void onReceive(Context context, Intent intent) {
                this.c = context;
                myMic = new MyMic(c);
                if (/* my condition */ true) {
                    myMic.startRecord();
                } else
                    myMic.stopRecord();
            }
        }

    So when I call startRecord() it creates a new MediaRecorder, but when I instantiate my class a second time I can't retrieve my object. Can I retrieve my MediaRecorder by its address?
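
    You cannot get a Java object back from a raw memory address; you have to keep a reference to it alive between broadcasts. A minimal sketch of the usual fix (mine, not from the question) - make MyMic a singleton, so every onReceive() call sees the same instance and therefore the same MediaRecorder:

        public class MyMic {
            private static MyMic instance;
            private MediaRecorder recorder2;

            private MyMic(Context context) { /* ... */ }

            // Every BroadcastReceiver invocation gets the same MyMic back
            public static synchronized MyMic getInstance(Context context) {
                if (instance == null) {
                    instance = new MyMic(context.getApplicationContext());
                }
                return instance;
            }
        }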

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Row Overflow Data Explanation  In SQL Server 2005 one table row can contain more than one varchar(8000) fields. One more thing, the exclusions has exclusions also the limit of each individual column max width of 8000 bytes does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns. Comparison Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005 An old but like a gold article. Talks about lots of concepts related to Index and the difference from earlier version to the newer version. I strongly suggest that everyone should read this article just to understand how SQL Server has moved forward with the technology. Improvements in TempDB SQL Server 2005 had come up with quite a lots of improvements and this blog post describes them and explains the same. If you ask me what is my the most favorite article from early career. I must point out to this article as when I wrote this one I personally have learned a lot of new things. Recompile All The Stored Procedure on Specific TableI prefer to recompile all the stored procedure on the table, which has faced mass insert or update. sp_recompiles marks stored procedures to recompile when they execute next time. This blog post explains the same with the help of a script.  2008 SQLAuthority Download – SQL Server Cheatsheet You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet. Difference Between DBMS and RDBMS What is the difference between DBMS and RDBMS? DBMS – Data Base Management System RDBMS – Relational Data Base Management System or Relational DBMS High Availability – Hot Add Memory Hot Add CPU and Hot Add Memory are extremely interesting features of the SQL Server, however, personally I have not witness them heavily used. These features also have few restriction as well. I blogged about them in detail. 2009 Delete Duplicate Rows I have demonstrated in this blog post how one can identify and delete duplicate rows. Interesting Observation of Logon Trigger On All Servers – Solution The question I put forth in my previous article was – In single login why the trigger fires multiple times; it should be fired only once. I received numerous answers in thread as well as in my MVP private news group. Now, let us discuss the answer for the same. The answer is – It happens because multiple SQL Server services are running as well as intellisense is turned on. Blog post demonstrates how we can do the same with the help of SQL scripts. Management Studio New Features I have selected my favorite 5 features and blogged about it. IntelliSense for Query Editing Multi Server Query Query Editor Regions Object Explorer Enhancements Activity Monitors Maximum Number of Index per Table One of the questions I asked in my user group was – What is the maximum number of Index per table? I received lots of answers to this question but only two answers are correct. Let us now take a look at them in this blog post. 
2010

Default Statistics on Column – Automatic Statistics on Column
The truth is, statistics can exist on a table even though there is no index on it. If you have the auto-create and/or auto-update statistics feature turned on for a SQL Server database, statistics will be automatically created on a column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions under which statistics are updated.

2011

T-SQL Scripts to Find Maximum between Two Numbers
In this blog post there are two different scripts listed which demonstrate ways to find the maximum of two numbers. I need your help: which of the scripts do you think is the most accurate way to find the maximum? (A simple CASE-based sketch appears after this entry.)

Find Details for Statistics of Whole Database – DMV – T-SQL Script
I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked for deprecation in a future release.

2012

Introduction to Function SIGN
SIGN is a very fundamental function. It returns the value 1, -1 or 0. If your value is negative it returns -1, and if it is positive it returns +1. Let us start with a small, simple example (see the short query after this entry).

Template Browser – A Very Important and Useful Feature of SSMS
Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are available for Analysis Services as well. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script.

An invalid floating point operation occurred
If you run any of the functions discussed in the post with inappropriate values, they will give you an error related to an invalid floating point operation. Honestly, there is no workaround except passing the function appropriate values. SQRT of a negative number would give a result that is not a real number, which is not supported at this point in time, and LOG of a negative number is not possible (because the logarithm is the inverse of the exponential function, and the exponential function is never negative).

Validating Spatial Object with IsValidDetailed Function
SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function checks whether the spatial object passed is valid or not. If it is valid, it reports that it is valid; if the spatial object is not valid, it returns the answer that it is not valid along with the reason, which makes it very easy to debug the issue and make the necessary correction.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
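Two quick illustrative queries for the entries above, not taken from the original posts. First, the SIGN function with arbitrary sample values:

SELECT SIGN(-15) AS NegativeValue,  -- returns -1
       SIGN(0)   AS ZeroValue,      -- returns 0
       SIGN(42)  AS PositiveValue;  -- returns 1

And one common way to find the maximum of two numbers, as referenced in the 2011 entry, using a simple CASE expression (variable names are illustrative):

DECLARE @a INT = 10, @b INT = 25;
SELECT CASE WHEN @a > @b THEN @a ELSE @b END AS MaxValue; -- returns 25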

    Read the article

  • Is this how dynamic language copes with dynamic requirement?

    - by Amumu
The question is in the title. I want to have my thinking verified by experienced people. You can add more or disregard my opinion, but give me a reason.

Here is an example requirement: suppose you are required to implement a fighting game. Initially, the game only includes fighters, who can attack each other. Each fighter can punch, kick or block incoming attacks. Fighters can have various fighting styles: Karate, Judo, Kung Fu... That's it for the simple universe of the game. In an OO language like Java, it can be implemented similar to this:

abstract class Fighter {
    int hp, attack;
    abstract void punch(Fighter otherFighter);
    abstract void kick(Fighter otherFighter);
    abstract void block(Fighter otherFighter);
}

class KarateFighter extends Fighter { /* ...implementation... */ }
class JudoFighter extends Fighter { /* ...implementation... */ }
class KungFuFighter extends Fighter { /* ...implementation... */ }

This is fine if the game stays like this forever. But somehow the game designers decide to change the theme of the game: instead of a simple fighting game, the game evolves to become an RPG, in which characters can not only fight but perform other activities, i.e. a character can be a priest, an accountant, a scientist, etc. At this point, to make the design more generic, we have to change the structure of the original design: Fighter is not used to refer to a person anymore; it refers to a profession, and the specialized classes (KarateFighter, JudoFighter, KungFuFighter) stay under it. Now we have to create a generic class named Person. However, to adapt to this change, I have to change the method signatures of the original operations:

import java.util.List;

class Person {
    int hp, attack;
    List<Profession> skillSet;
}

abstract class Profession {}

abstract class Fighter extends Profession {
    abstract void punch(Person otherFighter);
    abstract void kick(Person otherFighter);
    abstract void block(Person otherFighter);
}

class KarateFighter extends Fighter { /* ...implementation... */ }
class JudoFighter extends Fighter { /* ...implementation... */ }
class KungFuFighter extends Fighter { /* ...implementation... */ }

class Accountant extends Profession {
    void calculateTax(Person p) { /* ...implementation... */ }
    void calculateTax(Company c) { /* ...implementation... */ }
}

// ... more professions ...

Here are the problems: To adapt to the method changes, I have to fix the places where the changed methods are called (refactoring). Every time a new requirement is introduced, the current structural design has to be broken to adapt to the changes, which leads to the first problem. A rigid structure makes code reuse hard: a function can only accept the predefined types, and it cannot accept future unknown types. A written function is bound to its current universe and has no way to accommodate new types without modification or a rewrite from scratch. I see Java has a lot of deprecated methods. OO is an extreme case because inheritance adds to the complexity, but in general, for a statically typed language, types are very strict.

In contrast, a dynamic language can handle the above case as follows:

;; fighter1 punches fighter2
(defun perform-punch (fighter1 fighter2)
  ;; ...implementation...
  )

;; fighter1 kicks fighter2
(defun perform-kick (fighter1 fighter2)
  ;; ...implementation...
  )

;; fighter1 blocks attacks from fighter2
(defun perform-block (fighter1 fighter2)
  ;; ...implementation...
  )

fighter1 and fighter2 can be anything, as long as they have the required data for the calculation, or the required methods (duck typing). You don't have to change from the type Fighter to Person.
In the case of Lisp, because the language is built around a single primary data structure, the list, it's even easier to adapt to changes. However, other dynamic languages can show similar behavior as well. I work primarily with static languages (mainly C and Java, though it has been a long time since I worked with Java). I started learning Lisp and some other dynamic languages this year, and I can see how they help improve my productivity.

    Read the article
