Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

  • Small standalone SQL database similar to Access in the old days (i.e. a file database)

    - by Ian
    Hi, I am looking for an easy-to-use, easy-to-deploy SQL-type database I can ship with a desktop application. This will be a small application users can download from my website. In the VB6 days, Access was the common database for small desktop apps; what are my options these days? Looking at SQL CE, it seems to have quite a few limitations, such as no COUNT(DISTINCT). SQL Express needs to be installed and running as a service (could I include the SQL Express deployment in my own deployment so the user doesn't even know it's been installed? I assume size would then be an issue). SQL Server 2005/2008 is not an option due to size and licensing restrictions. I would like to use C#, WPF and Entity Framework. What would seem to be the best options based on your knowledge and experience? Thanks
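    For comparison, here is a minimal sketch of the single-file approach using SQL Server Compact (the SQL CE option mentioned above); the file name and table are illustrative, and SQLite with an Entity Framework provider is a similar alternative if SQL CE's query limitations are a problem.

      using System;
      using System.Data.SqlServerCe; // ships with the SQL Server Compact redistributable

      class EmbeddedDbDemo
      {
          static void Main()
          {
              // A single .sdf file deployed next to the executable; no service installation required.
              const string connStr = "Data Source=app.sdf";

              bool firstRun = !System.IO.File.Exists("app.sdf");
              if (firstRun)
                  using (var engine = new SqlCeEngine(connStr))
                      engine.CreateDatabase(); // one-time creation on first run

              using (var conn = new SqlCeConnection(connStr))
              {
                  conn.Open();
                  if (firstRun)
                      new SqlCeCommand("CREATE TABLE Customers (Id int, Name nvarchar(100))", conn).ExecuteNonQuery();
                  new SqlCeCommand("INSERT INTO Customers VALUES (1, 'Ian')", conn).ExecuteNonQuery();
              }
          }
      }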

    Read the article

  • How to Design a SaaS Database

    - by Josh Curren
    I have a web app that I built for a trucking company that I would like to offer as SaaS. What is the best way to design the database? Should I create a new database for each company? Or should I use one database with tables that have a prefix of the company name? Or should I use one database with one of each table and just add a company ID field to the tables? Or is there some other way to do it?
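    The third option (shared tables with a company ID column) might look like the sketch below; the class and property names are illustrative, not from the original app. Per-company databases give stronger isolation and simpler per-customer backup/restore, at the cost of more deployment and migration overhead.

      using System.Collections.Generic;
      using System.Linq;

      // Shared-schema multi-tenancy: every row carries the company it belongs to.
      class Company  { public int Id; public string Name; }
      class Shipment { public int Id; public int CompanyId; public string Destination; }

      class ShipmentRepository
      {
          private readonly List<Shipment> shipments = new List<Shipment>();

          // Filtering by CompanyId on every query is what keeps tenants isolated
          // when they all share one database and one set of tables.
          public IEnumerable<Shipment> ForCompany(int companyId)
          {
              return shipments.Where(s => s.CompanyId == companyId);
          }
      }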

    Read the article

  • Retrieve Heap memory size and its usage statistics etc...?

    - by AKN
    Let's say I open some application or process, do some work with it, and then close it. I need to know whether this application caused any memory leak, i.e. used up some heap memory and did not release it properly. Can I get these statistics somehow? I'm using Visual Studio (for development) under Windows. I would also be interested in this information for any 3rd-party application.
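    Heap-level detail needs a profiler or the debugger, but coarse per-process numbers (private bytes, working set) can be read programmatically for any running process, including 3rd-party ones. A minimal sketch, with the process name passed on the command line as an assumption; a private-bytes baseline that keeps growing across repeated runs of the same operation is the usual leak symptom.

      using System;
      using System.Diagnostics;

      class ProcessMemoryReport
      {
          static void Main(string[] args)
          {
              if (args.Length == 0)
              {
                  Console.WriteLine("usage: ProcessMemoryReport <processName>");
                  return;
              }

              // Snapshot the memory counters of every process matching the given name.
              foreach (Process p in Process.GetProcessesByName(args[0]))
              {
                  Console.WriteLine("PID {0}", p.Id);
                  Console.WriteLine("  Private bytes    : {0:N0}", p.PrivateMemorySize64);
                  Console.WriteLine("  Working set      : {0:N0}", p.WorkingSet64);
                  Console.WriteLine("  Peak working set : {0:N0}", p.PeakWorkingSet64);
              }
          }
      }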

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don't miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog's birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don't want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases.

    If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn't just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn't decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases.

    If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years.

    Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database.

    Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives when he accidentally set fire to it. As any programmer knows who has forgotten to hit "save" or has experienced a sudden power outage, thousands of hours of work were lost in a single instant.

    Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Should we have a database-independent, SQL-like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is often preferred to write raw SQL queries for better efficiency, but when you write raw SQL queries your code gets tied to the database you are using. I also understand it is important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use a database-specific feature, why should I be tied to that database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some generic, SQL-like language which can translate to any database's SQL dialect; even Django's ORM could be built over it. That way, if you step outside the ORM but use nothing database-specific, you can still remain database independent. I asked the same question to Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and exploit its full power, to which I agree. But my point was not that we should always be database independent; my point is that we should be database independent until we use a database-specific feature. Please explain: should there be such a generic SQL layer over the actual SQL?

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    During my recent training, I was asked by a student if I know a place where he can download spatial files for all the countries around the world, as well as whether there is a way to upload shape files to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]

    Read the article

  • 11gR2 DB 11.2.0.1 Certified with E-Business Suite on Solaris 10 (x86-64)

    - by Steven Chan
    Oracle Database 11g Release 2 version 11.2.0.1 is now certified with Oracle E-Business Suite 11i (11.5.10.2) and Release 12 (12.0.4 or higher, 12.1.1 or higher) on Oracle Solaris on x86-64 (64-bit) running Solaris 10. This announcement includes:
    - Oracle Database 11gR2 version 11.2.0.1
    - Oracle Database 11gR2 version 11.2.0.1 Real Application Clusters (RAC)
    - Transparent Data Encryption (TDE) Column Encryption with EBS 11i and R12
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for E-Business Suite 11i and R12 Database Instances
    - Transparent Data Encryption (TDE) Tablespace Encryption

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol; to work around it you have to disable the protocol, which is enabled automatically on Windows 2008. This needs to be done on both the servers and the clients. Note that "server" is the server which hosts the RAF repository service and the RM1 folder, and "client" is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1

    To disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
    3. Add a new REG_DWORD value with the name "Smb2" (without quotation marks). Value name: Smb2, value type: REG_DWORD, 0 = disabled, 1 = enabled.
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.

    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there is an extra space after the "=" sign.

    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 "client" systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there is an extra space after the "=" sign.
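    For the server-side registry change, scripting the edit avoids manual regedit mistakes. Below is a minimal C# sketch of the same change using the Registry.SetValue API; the program structure and the "enable" argument are illustrative additions, it must run with administrative rights, and a reboot is still required afterwards.

      using System;
      using Microsoft.Win32;

      class ToggleSmb2
      {
          static void Main(string[] args)
          {
              // Registry key from the server-side steps above.
              const string keyPath = @"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters";

              // 0 = SMB 2.0 disabled, 1 = enabled (pass "enable" to turn it back on).
              int value = (args.Length > 0 && args[0] == "enable") ? 1 : 0;

              Registry.SetValue(keyPath, "Smb2", value, RegistryValueKind.DWord);
              Console.WriteLine("Smb2 set to {0}; reboot the server for the change to take effect.", value);
          }
      }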

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor if the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g.:

    object a
    object b
    object c
    object d

    object b has a relationship to object a
    object c has a relationship to objects b, a
    object d has a relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+

    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query

    SELECT * FROM `syndicates`
    ... JOINS TO child and parent tables ...
    WHERE parent_id=a AND type_id=6;

    rather than having a query with a join to each level up the chain.

    The Problem: This table grows exponentially and, in a given year, could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync (see the sketch below). Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but allows the greatest flexibility and the fastest read operations.

    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this database is done through an ORM framework.
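    As a rough sketch of the synchronization cost mentioned above, linking one new child writes a row for the direct parent plus a row for every ancestor of that parent, which is where the insert amplification comes from. The types below are hypothetical in-memory stand-ins for the syndicate table, and the handling of type_id is an assumption since its exact meaning isn't specified in the question.

      using System;
      using System.Collections.Generic;

      // Hypothetical in-memory stand-in for one row of the syndicate table described above.
      class SyndicateRow { public int ChildId; public int ParentId; public int TypeId; }

      class Syndicate
      {
          public readonly List<SyndicateRow> Rows = new List<SyndicateRow>();
          private int nextTypeId = 1; // treated here as a running id; its real meaning is an assumption

          // Linking a child to a parent writes one row for the direct link
          // plus one row per ancestor of that parent -- the write amplification
          // discussed in the question.
          public void Link(int childId, int parentId, Func<int, IEnumerable<int>> ancestorsOf)
          {
              Rows.Add(new SyndicateRow { ChildId = childId, ParentId = parentId, TypeId = nextTypeId++ });

              foreach (int ancestor in ancestorsOf(parentId))
                  Rows.Add(new SyndicateRow { ChildId = childId, ParentId = ancestor, TypeId = nextTypeId++ });
          }
      }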

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background
    While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough for customers to typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions
    The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required. I've worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward. You can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness. Then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.

    Snapshots
    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!

    Analysis
    It looks like (from the second pie chart) that the majority of the allocations were in the string class. This also is expected for me because the majority of the memory allocated is in the web service responses, so it doesn't seem the entities I'm adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) are taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for "What to look for on the summary" which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. This means that, for instances of a class that are growing in memory (more are being created than cleaned up), you see which ones are survivors (not collected) from garbage collection. This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph" which creates a graph showing what references are being held in memory and who is holding onto them.

    Visual Studio Integration
    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it makes it well worth it.

    Summary
    If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!

    Technorati Tags: Influencers, ANTS, Memory, Profiler
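    The 5-second quote cache described in the Background section can be sketched with the standard System.Runtime.Caching.MemoryCache; the quote-fetching delegate and key scheme here are illustrative, not the author's actual data access layer.

      using System;
      using System.Runtime.Caching;

      class QuoteCache
      {
          private static readonly MemoryCache Cache = MemoryCache.Default;

          // Returns a cached quote if one was fetched in the last 5 seconds,
          // otherwise calls the (hypothetical) web service and caches the result.
          public static string GetQuote(string symbol, Func<string, string> fetchFromService)
          {
              var cached = Cache.Get(symbol) as string;
              if (cached != null)
                  return cached;

              string quote = fetchFromService(symbol);
              Cache.Set(symbol, quote, new CacheItemPolicy
              {
                  AbsoluteExpiration = DateTimeOffset.UtcNow.AddSeconds(5)
              });
              return quote;
          }
      }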

    Read the article

  • Crash on iOS when system purges memory and closes a UIViewController

    - by Amiramix
    My application is crashing when one of the views is purged from memory because of a low-memory condition. At least this is what I understand from the crash log. It happens on numerous screens, but only when opening a Facebook dialog (using the Facebook SDK). Basically, it looks like the system sometimes runs out of memory when we need to present a Facebook dialog (e.g. to let the user post something on the Facebook timeline).

    Date/Time: 2012-03-14 19:47:33.819 +0000
    OS Version: iPhone OS 5.1 (9B176)
    Report Version: 104
    Exception Type: EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x30000008
    Crashed Thread: 0

    Thread 0 name: Dispatch queue: com.apple.main-thread
    Thread 0 Crashed:
    0   libobjc.A.dylib     0x30f2bf78 objc_msgSend + 16
    1   MyApp               0x00003c0e -[LTBaseViewController viewDidUnload] (LTBaseViewController.m:145)
    2   MyApp               0x00004ea2 -[LTBaseTableViewController viewDidUnload] (LTBaseTableViewController.m:90)
    3   UIKit               0x33766bd8 -[UIViewController unloadViewForced:] + 244
    4   UIKit               0x338ae492 -[UIViewController purgeMemoryForReason:] + 58
    5   Foundation          0x3071a4f8 __57-[NSNotificationCenter addObserver:selector:name:object:]_block_invoke_0 + 12
    6   CoreFoundation      0x30e95540 ___CFXNotificationPost_block_invoke_0 + 64
    7   CoreFoundation      0x30e21090 _CFXNotificationPost + 1400
    8   Foundation          0x3068e3e4 -[NSNotificationCenter postNotificationName:object:userInfo:] + 60
    9   Foundation          0x3068fc14 -[NSNotificationCenter postNotificationName:object:] + 24
    10  UIKit               0x3387926a -[UIApplication _performMemoryWarning] + 74
    11  UIKit               0x33879364 -[UIApplication _receivedMemoryNotification] + 168
    12  libdispatch.dylib   0x36a12252 _dispatch_source_invoke + 510
    13  libdispatch.dylib   0x36a0fb1e _dispatch_queue_invoke$VARIANT$up + 42
    14  libdispatch.dylib   0x36a0fe64 _dispatch_main_queue_callback_4CF$VARIANT$up + 152
    15  CoreFoundation      0x30e9c2a6 __CFRunLoopRun + 1262
    16  CoreFoundation      0x30e1f49e CFRunLoopRunSpecific + 294
    17  CoreFoundation      0x30e1f366 CFRunLoopRunInMode + 98
    18  GraphicsServices    0x33fb6432 GSEventRunModal + 130
    19  UIKit               0x336f5e76 UIApplicationMain + 1074
    20  MyApp               0x00004818 main (main.m:16)
    21  MyApp               0x000023b4 0x1000 + 5044

    I checked and there are almost no memory leaks; e.g. when testing the app for an hour, the total memory leaked was around 2-3 KB, caused by some string-copying libraries. So I don't believe this is caused by the application. I guess that when the phone has not been restarted for some time there are applications running in the background, and when using the Facebook SDK memory becomes a problem and the system tries to recover memory from random applications, including my application. My question is: how can I prevent this crash from happening? How should I handle unloadViewForced on a view controller to make the app more robust in low-memory conditions? And lastly, am I right that this crash log suggests the crash occurred because the system tried to free memory and my application didn't handle it properly? Any help greatly appreciated.

    Read the article

  • C++ std::vector memory/allocation

    - by aaa
    From a previous question about vector capacity, http://stackoverflow.com/questions/2663170/stdvector-capacity-after-copying, Mr. Bailey said:

    "In current C++ you are guaranteed that no reallocation occurs after a call to reserve until an insertion would take the size beyond the value of the previous call to reserve. Before a call to reserve, or after a call to reserve when the size is between the value of the previous call to reserve and the capacity, the implementation is allowed to reallocate early if it so chooses."

    So, if I understand correctly, in order to ensure that no reallocation happens until the capacity is exceeded, I must call reserve twice? Can you please clarify it? I am using the vector as a memory stack like this:

    std::vector<double> memory;
    memory.reserve(size);
    memory.insert(memory.end(), matrix.data().begin(), matrix.data().end()); // smaller than size
    size_t offset = memory.size();
    memory.resize(memory.capacity(), 0);

    I need to guarantee that reallocation does not happen in the above. Thank you.

    PS: I would also like to know if there is a better way to manage a memory stack in a similar manner, other than a vector.

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database Continuous Integration
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)

    Setting up Continuous Integration with Red Gate tools
    · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (Product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)

    Other
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)

    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • Oracle Database 11g Implementation Specialist - March 14-16, 2011

    - by Claudia Costa
    OPN Bootcamp - Oracle Software Specialization Course. Dear Partner, Oracle's new partner program is built on the Specialization of its partners. In the last fiscal year, many partners already started their specializations in the areas they are dedicated to and that are priorities for their business. To support the effort and dedication of many partners in getting their staff certified, the local alliances and channel team has launched a series of initiatives. Among them is the creation of this OPN Bootcamp together with Oracle University, dedicated specifically to training and preparation for the Implementation exams, which are mandatory to obtain the Oracle Database 11g specialization. This training course aims to prepare partners for the Implementation exam to be taken on March 29, during the OPN Satellite Event that will take place in Lisbon (further details about this event will be communicated shortly). Your presence in this preparation course on the dates preceding the OPN Satellite event is essential so that your technicians can take the exam on March 29 fully prepared and with the best chance of a positive result. This way, on March 29, they can obtain the much-desired certification, with exam costs 100% covered by Oracle. We count on your presence!

    Content: Oracle Database 11g: 2 Day DBA Release 2 + preparation for the exam 1Z0-154 Oracle Database 11g: Essentials
    Audience: Database Administrators, Technical Administrators, Technical Consultants, Support Engineers
    Prerequisites: Knowledge of the Linux operating system
    Duration: 3 days + exam (1 day)
    Schedule: 9:00 / 18:00
    Date: March 14-16
    Location: Oracle Training Center, Pessoas e Processos, Rua do Conde Redondo, 145 - 1º - Lisboa (access: Marquês de Pombal metro station)
    Participation costs: 140€ per participant/day = 420€ per participant (3 days)* - this price includes the Implementation exam. *Final price for partners; it already includes funding from the Alliances and Channel team.
    Exam date and location: March 29 - Oracle University premises

    Registration is free. Places are limited. Reserve your place now: Email
    For more information about registration: Vítor Pereira, phone: 21 778 38 39, mobile: 933777099, fax: 21 778 38 40
    For other information, please contact: Claudia Costa / 21 423 50 27

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery
    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan.

    What is a good SQL Server disaster recovery plan?
    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?
    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs make it possible to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it's not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores transaction log backups, one after another, up to the recovery point. During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?
    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, and they return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. When you have finally read the transactions executed, you have to create a script for them.

    How to easily repair a SQL database?
    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next.

    ApexSQL Log offers three options. Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
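    As a rough illustration of the schedule described above (a full backup every night plus transaction log backups every 15 minutes on a database in the full recovery model), the backup statements can be issued from any client. This C# sketch uses hypothetical database names, paths, and connection settings, and is not a replacement for the SQL Server maintenance plans mentioned in the article.

      using System.Data.SqlClient;

      class BackupRunner
      {
          const string ConnStr = "Server=.;Database=master;Integrated Security=true";

          // Full database backup (run nightly in the strategy described above).
          public static void FullBackup()
          {
              Run("BACKUP DATABASE [AdventureWorks2014] TO DISK = N'C:\\Backup\\AW_full.bak' WITH INIT");
          }

          // Transaction log backup (run every 15 minutes; requires the full recovery model).
          public static void LogBackup()
          {
              Run("BACKUP LOG [AdventureWorks2014] TO DISK = N'C:\\Backup\\AW_log.trn'");
          }

          static void Run(string sql)
          {
              using (var conn = new SqlConnection(ConnStr))
              using (var cmd = new SqlCommand(sql, conn))
              {
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }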

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, all DAOs and models alike, and that is obviously a bad thing. I'm a little bit tired and out of time; I will update with whatever questions you have. Thanks!
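    One way to keep the format-specific readers and the database layer decoupled, as discussed above, is to have every file DAO produce the same neutral model while only the database DAO knows about tables, so a schema change is absorbed in one place. The names below are hypothetical, a sketch rather than a prescribed design.

      using System.Collections.Generic;

      // Neutral model shared by all DAOs; it mirrors the domain, not any one file format.
      class Record
      {
          public string Key;
          public Dictionary<string, string> Fields = new Dictionary<string, string>();
      }

      // Each supported input format gets its own implementation.
      interface IFileDao
      {
          IEnumerable<Record> Read(string path);
      }

      class TextFileDao : IFileDao
      {
          public IEnumerable<Record> Read(string path)
          {
              foreach (string line in System.IO.File.ReadLines(path))
              {
                  var parts = line.Split(';'); // assumed delimiter for the illustration
                  yield return new Record { Key = parts[0] };
              }
          }
      }

      // Only this interface knows about database tables; a table change is
      // absorbed here instead of propagating into every file DAO.
      interface IDatabaseDao
      {
          void Save(IEnumerable<Record> records);
      }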

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using LINQ, and my methodology is to build model classes to represent each table, where each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.:

    public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if it met a certain criterion. My Save() method was slow, presumably because I was making all kinds of round-trips to the server and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this:

    foreach(Customer c in customerCollection)
    {
        if(c.ShouldSave())
        {
            c.Save();
        }
    }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this:

    CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    So, SQL Server parses the string, performs the logic to determine the appropriateness of the save, and then inserts, updates, or ignores. With C#/LINQ doing this work, it saved 5-10 records/s. When SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
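    Before moving the logic into a stored procedure, one intermediate step implied above is to stop calling DataContext.SubmitChanges() per record and submit once per batch, so all pending inserts and updates go out under a single call and a single transaction. The MyDataContext type and QueueSave method below are hypothetical stand-ins for the question's Save() internals, not the author's actual code.

      // Sketch: queue the changes and submit once per batch.
      static void SaveAll(MyDataContext db, IEnumerable<Customer> customerCollection)
      {
          foreach (Customer c in customerCollection)
          {
              if (c.ShouldSave())
              {
                  // Hypothetical: registers the insert/update with the DataContext
                  // (e.g. via InsertOnSubmit) without submitting yet.
                  c.QueueSave(db);
              }
          }

          // One SubmitChanges (and one transaction) for the whole batch
          // instead of one per record.
          db.SubmitChanges();
      }

    This removes the per-record SubmitChanges overhead, though each generated statement still executes on the server, so the single stored-procedure round trip described above will typically remain faster for large batches.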

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that, on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes, e.g.:

    Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"):

    Free Memory and Used Memory. Estimating the resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.

    That said, focusing more specifically on the percentage question: apart from the memory that the OS takes, what should be the minimum free memory available on every node so that it operates normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
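    To make the "free" vs. "available" distinction concrete, utilization can be computed directly from /proc/meminfo by treating buffers and page cache as reclaimable. This is a minimal sketch; the parsing is simplified and the 80% threshold is just the rule of thumb quoted above.

      using System;
      using System.IO;
      using System.Linq;

      class MemCheck
      {
          // Reads a single kB value from /proc/meminfo, e.g. "MemTotal:  99191652 kB".
          static long Read(string[] lines, string key)
          {
              string line = lines.First(l => l.StartsWith(key));
              return long.Parse(line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)[1]);
          }

          static void Main()
          {
              var lines = File.ReadAllLines("/proc/meminfo");
              long total = Read(lines, "MemTotal:");
              long free = Read(lines, "MemFree:");
              long buffers = Read(lines, "Buffers:");
              long cached = Read(lines, "Cached:");

              // Page cache and buffers are reclaimable, so count them as available.
              long available = free + buffers + cached;
              double utilization = 100.0 * (total - available) / total;

              Console.WriteLine("Utilization: {0:F1}% (investigate above 80%)", utilization);
          }
      }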

    Read the article

  • How can I free all allocated memory at once?

    - by Tommy
    Here is what I am working with:

    char* qdat[][NUMTBLCOLS];
    char** tdat[];
    char* ptr_web_data;

    // Loop thru each table row of the query result set
    for(row_index = 0; row_index < number_rows; row_index++)
    {
        // Loop thru each column of the query result set and extract the data
        for(col_index = 0; col_index < number_cols; col_index++)
        {
            ptr_web_data = (char*) malloc((strlen(Data) + 1) * sizeof(char));
            memcpy (ptr_web_data, column_text, strlen(column_text) + 1);
            qdat[row_index][web_data_index] = ptr_web_data;
        }
    }
    tdat[row_index] = qdat[col_index];

    After the data is used, the memory allocated is released one at a time using free().

    for(row_index = 0; row_index < number_rows; row_index++)
    {
        // Loop thru all columns used
        for(col_index = 0; col_index < SARWEBTBLCOLS; col_index++)
        {
            // Free memory block pointed to by results set array
            free(tdat[row_index][col_index]);
        }
    }

    Is there a way to release all the allocated memory at once, for this array? Thank You.

    Read the article

  • Memory issue: iPad 4.2 crashes

    - by Manoj Kumar
    I am developing an application which receives 600-700 KB of XML data from the server. I have to do some manipulation on that data, so once the data is received, memory usage increases from 600 KB to 2 MB. The views already occupy 4 MB of memory in the application. So while processing the XML data I do some manipulation (pre-parsing); the memory increases from 600 KB to 2 MB and finally decreases back to 600 KB. Due to the increase in memory, the application gets a memory warning. On the memory warning I release all the views in the navigation controller, but that releases only 1 MB of memory. Even though I release all the views, the application is crashing. Please help me out with this issue. It happens on iPad 4.2. Thanks in advance

    Read the article

  • Memory address of a variable

    - by dotnetvoyager
    Hi everyone, is it possible to get the memory address of a variable in C#? What I am trying to do is very simple: I want to declare variables of type double, float, and decimal and assign the value 1.1 to each of them. Then I would like to go and see how these values are represented in memory. I need the memory address of each variable in order to see how it is stored in memory. Once I have the memory address, I plan to put a breakpoint in the code and use the Debug - Windows - Memory option in Visual Studio to see how the numbers are stored in memory. Cheers,
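    If the goal is just to see the stored bytes, an unsafe block gives the address of each local, and BitConverter/decimal.GetBits dump the representation without the debugger; a minimal sketch (requires compiling with /unsafe), offered as an illustration rather than the only approach:

      using System;

      class Representation
      {
          static unsafe void Main()
          {
              double d = 1.1;
              float f = 1.1f;
              decimal m = 1.1m;

              // Addresses of the stack locals, usable with Debug - Windows - Memory.
              Console.WriteLine("double at 0x{0:X}", (ulong)&d);
              Console.WriteLine("float  at 0x{0:X}", (ulong)&f);

              // Byte-level representation, printed directly instead of via the memory window.
              Console.WriteLine("double bytes : " + BitConverter.ToString(BitConverter.GetBytes(d)));
              Console.WriteLine("float bytes  : " + BitConverter.ToString(BitConverter.GetBytes(f)));
              Console.WriteLine("decimal bits : " + string.Join(", ", decimal.GetBits(m)));
          }
      }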

    Read the article

  • Control SQL Server CLR Reserved Memory

    - by Ryu
    I've recently enabled CLR on my 64-bit SQL Server 2005 machine for use by about 3 procs. When I run the following query to gather some info on memory usage...

    select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb as TotalMemoryUsage,
           virtual_memory_reserved_kb
    from sys.dm_os_memory_clerks
    where type = 'MEMORYCLERK_SQLCLR'

    ...I get 129 MB of memory usage and 6.3 GB of virtual memory reserved. The total memory of the machine is 21 GB. What does reserved virtual memory mean exactly, and how can I control the size that is allocated? 6 GB is overkill for what we're doing, and the memory would be much better utilized by the sproc cache. I'm concerned this reserved memory will cause swapping to the page file. Please help me take back control of the memory! Thanks

    Read the article
