Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

Page 155 of 1641

  • Introducing Next-Generation Enterprise Auditing and Database Firewall Platform Webcast, 12/12/12

    - by Troy Kitch
    Join us, December 12 at 10am PT/1pm ET, to hear about a new Oracle product that monitors Oracle and non-Oracle database traffic, detects unauthorized activity including SQL injection attacks, and blocks internal and external threats from reaching the database. In addition, this new product collects and consolidates audit data from databases, operating systems, directories, and any custom template-defined source into a centralized, secure warehouse. This new enterprise security monitoring and auditing platform allows organizations to quickly detect and respond to threats with powerful real-time policy analysis, alerting and reporting capabilities. Based on proven SQL grammar analysis that ensures accuracy, performance, and scalability, organizations can deploy with confidence in any mode. You will also hear how organizations such as TransUnion Interactive and SquareTwo Financial rely on Oracle today to monitor and secure their Oracle and non-Oracle database environments. Register for the webcast here.

    Read the article

  • Should I create an Enum mapping to my database table

    - by CrazyHorse
    I have a database table containing a list of systems relevant to the tool I am building, mostly in-house applications or third-party systems we receive data from. This table is added to infrequently, roughly every two months. One of these systems is Windows itself, which is where we store our users' LANs, and I now need to explicitly reference the ID relating to Windows to query for user name, team, etc. I know it would be bad practice to embed the ID itself into the code, so my question is: what would be the best way to avoid this? I'm assuming my options are (see the sketch below for the enum approach):
    - create a global constant representing this ID
    - create a global enum containing all systems now
    - create a global enum and add systems to it as and when they are required in the code
    - retrieve the ID from the database based on system name
    I appreciate this may seem a trivial question, but I am going to have many situations like this during the course of this build, and although we are in complete control of the database I would like to conform to best practice as far as possible. Many thanks!
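    For illustration, here is a minimal sketch of the enum option in Java. The SystemType name, its members, and the ID values are all hypothetical, and the IDs would have to be kept in step with whatever the table actually holds.

        // Hypothetical enum mapping known systems to their database IDs.
        public enum SystemType {
            WINDOWS(1),
            PAYROLL_APP(2),
            THIRD_PARTY_FEED(3);

            private final int databaseId;

            SystemType(int databaseId) {
                this.databaseId = databaseId;
            }

            public int getDatabaseId() {
                return databaseId;
            }

            // Reverse lookup, e.g. when reading rows back from the database.
            public static SystemType fromId(int id) {
                for (SystemType s : values()) {
                    if (s.databaseId == id) {
                        return s;
                    }
                }
                throw new IllegalArgumentException("Unknown system ID: " + id);
            }
        }

    The trade-off the question describes still applies: the enum must be maintained by hand as the table grows, whereas looking the ID up by system name at startup keeps the database as the single source of truth at the cost of one extra query.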

    Read the article

  • Analyzing I/O Characteristics and Sizing Storage Systems for SQL Server Database Applications

    Understanding how to analyze the characteristics of I/O patterns in the Microsoft® SQL Server® data management software and how they relate to a physical storage configuration is useful in determining deployment requirements for any given workload. A well-performing I/O subsystem is a critical component of any SQL Server application. I/O subsystems should be sized in the same manner as other hardware components such as memory and CPU. As workloads increase it is common to increase the number of CPUs and increase the amount of memory. Increasing disk resources is often necessary to achieve the right performance, even if there is already enough capacity to hold the data.

    Read the article

  • How to Identify and Backup the Latest SQL Server Database in a Series

    I have to support a third party application that periodically creates a new database on the fly. This obviously causes issues with our backup mechanisms. The databases have a particular pattern for naming, so I can identify the set of databases, however, I need to make sure I'm always backing up the newest one. Read this tip to ensure you are backing up your latest database in a series.
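    One plausible way to do this, sketched here in Java via JDBC rather than taken from the tip itself: query sys.databases for the most recently created database whose name matches the pattern, then issue a BACKUP DATABASE for it. The connection string, the 'AppDb_%' pattern, and the backup path are all assumptions, and the Microsoft JDBC driver is assumed to be on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class BackupNewestDatabase {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string; adjust server, credentials, etc.
                String url = "jdbc:sqlserver://localhost:1433;databaseName=master;"
                           + "user=backupUser;password=secret";
                try (Connection conn = DriverManager.getConnection(url)) {
                    // Most recently created database matching the naming pattern.
                    String find = "SELECT TOP 1 name FROM sys.databases "
                                + "WHERE name LIKE ? ORDER BY create_date DESC";
                    String newest = null;
                    try (PreparedStatement ps = conn.prepareStatement(find)) {
                        ps.setString(1, "AppDb_%");  // assumed naming pattern
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                newest = rs.getString(1);
                            }
                        }
                    }
                    if (newest == null) {
                        System.out.println("No database matched the pattern.");
                        return;
                    }
                    // Database names cannot be parameterized; bracket-quote instead.
                    String backup = "BACKUP DATABASE [" + newest.replace("]", "]]")
                                  + "] TO DISK = 'D:\\Backups\\" + newest + ".bak'";
                    try (Statement st = conn.createStatement()) {
                        st.execute(backup);
                    }
                    System.out.println("Backed up " + newest);
                }
            }
        }

    In production this logic would more likely live in a SQL Agent job written in T-SQL, but the shape of the query is the same.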

    Read the article

  • Database Insider - October 2012 issue

    - by Javier Puerta
    The October issue of the Database Insider newsletter is now available. (Full newsletter here)

    NEWS: Newly Launched Oracle Exadata X3 Redefines Extreme Performance. At Oracle OpenWorld 2012, Oracle announced the general availability of Oracle Exadata Database Machine X3, a complete package of servers, storage, networking, and software that is massively scalable, secure, and fully redundant—and ideally suited for the varied and unpredictable workloads of cloud computing. Read More

    WEBCASTS: What Are Oracle Users Doing to Improve Availability and Disaster Recovery? The Independent Oracle Users Group (IOUG) surveyed more than 350 data managers and professionals regarding planned and unplanned downtime, database high availability, and disaster recovery solutions. Download the report and watch the Webcast today.

    Read the article

  • Lipoaspiration in your SQL Server Database

    Once, when disk space was at a premium, DBAs fought hard to keep the size of their database down. Now there seems less motivation to 'fight the flab' of a database. Fabiano Amorim was watching television recently when the subject matter, cosmetic surgery, gave him the theme and inspiration for this guide to keeping your database fit and trim.

    Read the article

  • Ubuntu 12.04 tilts when trying to open a large Excel file with LibreOffice or Matlab

    - by user1565754
    I have an xlsx file of size 27.3 MB, and when I try to open it in either LibreOffice or Matlab the whole system slows down. My processor is an AMD Sempron(tm) 140 (should be about 2.7 GHz) and I have about 1.7 GB of memory. Any ideas? I opened this file in Windows with no problem... of course it took a few seconds to load, but Ubuntu freezes completely with this file. Smaller files of 3 MB, 5 MB, etc. open just fine. Thanks for the support =)

    Read the article

  • Game Database Connectivity Java

    - by The Kraken
    I'm developing a simple multi-player puzzle game in Java. Both players should be able to view the same game board on their own computers. Then, when one player makes an action in the game (e.g. drags an object onto a coordinate space), the game's view should update automatically on the other computer's game screen. I'd like all this to happen over the internet, not requiring both computers to be on the same LAN connection. If I need to use SQL/PHP to accomplish this, I'm unsure how to design the database to accomplish something as simple as the following (a polling sketch follows below):
    1. Player A drags an element onscreen
    2. The game sends the element's coordinates to the database/server
    3. Player B's computer detects a change to an item in the database
    4. Player B's computer grabs the coordinates of Player A's item
    5. Player B's machine draws onscreen elements at the received coordinates
    Could somebody point me in the right direction?
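    As a rough illustration of steps 2 through 5, here is a minimal Java sketch that polls a shared database over JDBC. The moves table, its columns, and the connection URL are hypothetical, and the MySQL Connector/J driver is assumed to be available.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class MovePoller {
            // Assumed schema: moves(id INT AUTO_INCREMENT, player VARCHAR(32), x INT, y INT)
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://example.com:3306/game?user=player&password=secret";
                long lastSeenId = 0;  // highest move ID we have already drawn
                try (Connection conn = DriverManager.getConnection(url)) {
                    while (true) {
                        // Ask only for moves we have not drawn yet.
                        try (PreparedStatement ps = conn.prepareStatement(
                                "SELECT id, player, x, y FROM moves WHERE id > ? ORDER BY id")) {
                            ps.setLong(1, lastSeenId);
                            try (ResultSet rs = ps.executeQuery()) {
                                while (rs.next()) {
                                    lastSeenId = rs.getLong("id");
                                    drawElement(rs.getString("player"), rs.getInt("x"), rs.getInt("y"));
                                }
                            }
                        }
                        Thread.sleep(500);  // poll twice a second; tune to taste
                    }
                }
            }

            static void drawElement(String player, int x, int y) {
                // Stand-in for the real rendering code.
                System.out.printf("%s moved to (%d, %d)%n", player, x, y);
            }
        }

    Polling a database can work for a turn-based puzzle game, but for anything approaching real time, a direct socket connection between the clients and a small game server is usually the better direction.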

    Read the article

  • iPhone app memory leak with UIImage animation? Problem testing on device

    - by user157733
    I have an animation which works fine in the simulator but crashes on the device. I am getting the following error:

        Program received signal: “0”.
        The Debugger has exited due to signal 10 (SIGBUS)

    A bit of investigating suggests that the UIImages are not getting released and I have a memory leak. I am new to this, so can someone tell me if this is the likely cause? If you could also tell me how to solve it, that would be amazing. The images are 480px x 480px and about 25 KB each. My code is below:

        NSArray *rainImages = [NSArray arrayWithObjects:
            [UIImage imageNamed:@"rain-loop0001.png"],
            [UIImage imageNamed:@"rain-loop0002.png"],
            [UIImage imageNamed:@"rain-loop0003.png"],
            [UIImage imageNamed:@"rain-loop0004.png"],
            [UIImage imageNamed:@"rain-loop0005.png"],
            [UIImage imageNamed:@"rain-loop0006.png"],
            // more looping images
            [UIImage imageNamed:@"rain-loop0045.png"],
            [UIImage imageNamed:@"rain-loop0046.png"],
            [UIImage imageNamed:@"rain-loop0047.png"],
            [UIImage imageNamed:@"rain-loop0048.png"],
            [UIImage imageNamed:@"rain-loop0049.png"],
            [UIImage imageNamed:@"rain-loop0050.png"],
            nil];
        rainImage.animationImages = rainImages;
        rainImage.animationDuration = 4.15/2;
        rainImage.animationRepeatCount = 0;
        [rainImage startAnimating];
        [rainImage release];

    Thanks

    Read the article

  • Memory leaks when touching UITableViewCells and popping off the view

    - by Falcon
    Hi All, I'm currently having a problem where the Leaks tool is reporting a slew of memory leaks after clicking on cells within a UITableView and then hitting the back button and popping off the view. The majority of the leaks reported cannot be traced back to any specific location in my code. They are:

        Leaked Object     #   Address      Size   Responsible Library   Responsible Frame
        NSCFArray         2   <multiple>   64     UIKit                 -[UITouch(UITouchInternal)
        UITouch           2   <multiple>   128    GraphicsServices      PurpleEventCallback
        Malloc 48 Bytes   2   <multiple>   96     Foundation            -[NSCFArray insertObject:atIndex:]
        UIDelayedAction   2   <multiple>   96     UIKit                 -[UILongPressGestureRecognizer startTimer]
        NSCFArray         2   <multiple>   64     UIKit                 -[UILongPressGestureRecognizer touchesBegan:withEvent:]
        Malloc 32 Bytes   2   <multiple>   64     Foundation            -[NSCFArray insertObject:atIndex:]
        Malloc 16 Bytes   2   <multiple>   32     Foundation            -[NSCFSet unionSet:]

    Now I have commented out all the code in any touch-event functions that I have written, and it still leaks if I click on a cell a few times and then hit the back button to return to the previous view. Any ideas on what might actually be the problem here? Thanks,

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);
        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);
        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj+offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions with greater functionality (and accessed more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

    Read the article

  • Will this class cause memory leaks, and does it need a dispose method? (asp.net vb)

    - by Phil
    Here is the class to export a GridView to an Excel sheet:

        Imports System
        Imports System.Data
        Imports System.Configuration
        Imports System.IO
        Imports System.Web
        Imports System.Web.Security
        Imports System.Web.UI
        Imports System.Web.UI.WebControls
        Imports System.Web.UI.WebControls.WebParts
        Imports System.Web.UI.HtmlControls

        Namespace ExcelExport
            Public NotInheritable Class GVExportUtil
                Private Sub New()
                End Sub

                Public Shared Sub Export(ByVal fileName As String, ByVal gv As GridView)
                    HttpContext.Current.Response.Clear()
                    HttpContext.Current.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fileName))
                    HttpContext.Current.Response.ContentType = "application/ms-excel"
                    Dim sw As StringWriter = New StringWriter
                    Dim htw As HtmlTextWriter = New HtmlTextWriter(sw)
                    Dim table As Table = New Table
                    table.GridLines = GridLines.Vertical
                    If (Not (gv.HeaderRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.HeaderRow)
                        table.Rows.Add(gv.HeaderRow)
                    End If
                    For Each row As GridViewRow In gv.Rows
                        GVExportUtil.PrepareControlForExport(row)
                        table.Rows.Add(row)
                    Next
                    If (Not (gv.FooterRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.FooterRow)
                        table.Rows.Add(gv.FooterRow)
                    End If
                    table.RenderControl(htw)
                    HttpContext.Current.Response.Write(sw.ToString)
                    HttpContext.Current.Response.End()
                End Sub

                Private Shared Sub PrepareControlForExport(ByVal control As Control)
                    Dim i As Integer = 0
                    Do While (i < control.Controls.Count)
                        Dim current As Control = control.Controls(i)
                        If (TypeOf current Is LinkButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, LinkButton).Text))
                        ElseIf (TypeOf current Is ImageButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, ImageButton).AlternateText))
                        ElseIf (TypeOf current Is HyperLink) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, HyperLink).Text))
                        ElseIf (TypeOf current Is DropDownList) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, DropDownList).SelectedItem.Text))
                        ElseIf (TypeOf current Is CheckBox) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, CheckBox).Checked))
                        End If
                        If current.HasControls Then
                            GVExportUtil.PrepareControlForExport(current)
                        End If
                        i = (i + 1)
                    Loop
                End Sub
            End Class
        End Namespace

    Will this class cause memory leaks? And does anything here need to be disposed of? The code is working, but I am getting frequent crashes of the app pool when it is in use. Thanks.

    Read the article

  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

        context = RequestContext(request, data)
        transaction.commit()
        # Returns a Django ``HttpResponse`` object, which is similar to a dictionary.
        return render_to_response('basic.html', data, context)

    I think it is a better idea to do this:

        context = RequestContext(request, data)
        response = render_to_response('basic.html', data, context)
        transaction.commit()
        return response

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there won't likely be many exceptions at that point in the function once the application is in production. But... I fear that this might cost more, and the pattern will be repeated throughout a number of functions, since the application is heavy with custom transaction handling, so now is the time to figure it out. If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that once Django is done converting the HttpResponse into a string for output, Python can immediately garbage collect it? Is there any reason I would want to use the first version (other than "It's one less line of code")?

    Read the article

  • Does the pointer passed to free() have to point to the beginning of the memory block, or can it point to the interior?

    - by Lambert
    The question is in the title... I searched but couldn't find anything. Edit: I don't really see any need to explain this, but because people think that what I'm saying makes no sense (and that I'm asking the wrong questions), here's the background. Since people seem to be very interested in the "root" cause of the problem rather than the actual question asked (since that apparently helps things get solved better, let's see if it does), here it is: I'm trying to make a D runtime library based on NTDLL.dll, so that I can use that library for subsystems other than the Win32 subsystem. That forces me to link only with NTDLL.dll. Yes, I'm aware that the functions are "undocumented" and could change at any time (even though I'd bet a hundred dollars that wcstombs will still do the same exact thing 20 years from now, if it still exists). Yes, I know people (especially Microsoft) don't like developers linking to that library, and I'll probably get criticized for it right here. And yes, those two points above mean that programs like chkdsk and defragmenters that run before the Win32 subsystem starts aren't even supposed to be created in the first place, because it's literally impossible to link with anything like kernel32.dll or msvcrt.dll and still have NT-native executables, so we developers should just pretend that those stages are meant to be forever out of our reach. But no, I doubt that anyone here would like me to paste a few thousand lines of code and help me look through them to figure out why memory allocations that aren't failing are being rejected by the source code I'm modifying. So that's why I asked about a different problem than the "root" cause, even though the latter is supposedly known to be the best practice by the community. If things still don't make sense, feel free to post comments below! :)

    Read the article

  • BufferedImage.getGraphics() resulting in memory leak, is there a fix?

    - by user359202
    Hi friends, I'm having a problem with a framework API calling the BufferedImage.getGraphics() method and thus causing a memory leak. What this method does is always call BufferedImage.createGraphics(). On a Windows machine, createGraphics() is handled by Win32GraphicsEnvironment, which keeps a listeners list inside its displayChanger field. When I call getGraphics on my BufferedImage someChart, someChart's SurfaceManager (which retains a reference to someChart) is added to the listeners map in Win32GraphicsEnvironment, preventing someChart from being garbage collected. Nothing afterwards removes someChart's SurfaceManager from the listeners map. In general, the summarized path stopping a BufferedImage from being garbage collected, once getGraphics is called, is as follows:

        GC Root -> localGraphicsEnvironment (Win32GraphicsEnvironment)
                -> displayChanger (SunDisplayChanger)
                -> listeners (Map)
                -> key (D3DCachingSurfaceManager)
                -> bImg (BufferedImage)

    I could have changed the framework's code so that after every call to BufferedImage.getGraphics(), I keep a reference to the BufferedImage's SurfaceManager. Then I get hold of localGraphicsEnvironment, cast it to Win32GraphicsEnvironment, and call removeDisplayChangedListener() using the reference to the BufferedImage's SurfaceManager; a sketch of that workaround follows below. But I don't think this is a proper way to solve the problem. Could someone please help me with this issue? Thanks a lot!
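    For reference, here is a rough Java sketch of the workaround described above: look up the image's SurfaceManager and unregister it from the graphics environment. It leans on internal sun.* classes, so the class names and method signatures are assumptions to verify against the JDK in use, and, as the poster says, it is not a proper long-term fix.

        import java.awt.GraphicsEnvironment;
        import java.awt.image.BufferedImage;

        // Internal, undocumented JDK classes; package names and signatures
        // vary between JDK builds, so treat every call here as an assumption.
        import sun.awt.DisplayChangedListener;
        import sun.awt.image.SurfaceManager;
        import sun.java2d.SunGraphicsEnvironment;

        public final class DisplayListenerCleanup {
            private DisplayListenerCleanup() {}

            // Call after the last getGraphics()/createGraphics() use on the image.
            static void detach(BufferedImage image) {
                GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
                if (env instanceof SunGraphicsEnvironment) {
                    // Assumes SurfaceManager.getManager returns the same manager
                    // that was registered as a display-changed listener.
                    SurfaceManager manager = SurfaceManager.getManager(image);
                    if (manager instanceof DisplayChangedListener) {
                        ((SunGraphicsEnvironment) env)
                                .removeDisplayChangedListener((DisplayChangedListener) manager);
                    }
                }
            }
        }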

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data database, and this is the code I deemed suitable for the task:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"ICD9"
                                                  inManagedObjectContext:passedContext];
        [fetchRequest setEntity:entity];
        NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
        [fetchRequest setPredicate:pred];
        NSError *err;
        NSArray *icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
        [fetchRequest release];
        if ([icd9s count] > 0) {
            for (int i = 0; i < [icd9s count]; i++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
                if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) {
                    [pool release];
                    return [icd9s objectAtIndex:i];
                }
                [pool release];
            }
        }
        return nil;

    After more thorough testing, it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent of the way through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient way? Thanks in advance!

    Read the article

  • Why is my CentOS, nginx, PHP, MySQL server using so much memory?

    - by kb2tfa
    I'm not sure where the problem lies. I have: CentOS 6, MySQL, PHP, PHP-FPM, and WordPress (one site). This is a dedicated server I'm learning on. As soon as I hit the site's URL, memory usage slammed to 125% on a 512 MB server, and I had to upgrade to 1 GB, so I'm not sure where the problem lies. I thought that by switching from Apache to nginx I would have more memory free, but I'm still stuck with the problem. Before launching the site, with nginx and mysqld running, the site used about 5% of memory. I renamed my "my-small.cnf" to my.cnf and put it in my /etc folder where the original was, but that seems not to have done it. After looking at my top output, I'm starting to think it may be php-fpm eating my memory, but I'm not sure. Is php-fpm the preferred way, or is there something better to use? I read about possible memory leaks in php-fpm. Here is what I have in php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;

        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.

        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;

        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid

        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log

        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice

        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0

        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0

        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0

        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        ;daemonize = yes

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;

        ; See /etc/php-fpm.d/*.conf

    Read the article

  • Installing SharePoint 2010 on one machine with the built-in database

    - by sreejukg
    It is very easy to deploy SharePoint 2010 on a single server using the built-in database; normally you would choose such an installation for evaluation purposes. When installing with the default settings, setup installs a Microsoft SQL Server 2008 Express database along with SharePoint. After installing SharePoint, you need to run the SharePoint Products and Technologies Configuration Wizard, which installs the Central Administration website and creates the configuration database and content database for SharePoint sites.

    Limitations:
    1. You cannot perform this installation on a domain controller.
    2. The maximum size for an Express edition database is 4 GB.

    SharePoint 2010 only supports 64-bit operating systems. The following steps are for Windows Server 2008 R2 Enterprise Edition, 64-bit.

    Installation steps: As a first step, from the setup's opening screen, click the link to install the software prerequisites. Click next, accept the license terms by selecting the checkbox, and click next again; the installation starts and its progress is shown on screen. This may take some time, because some components need to be downloaded from the internet during this process, so make sure you are connected to the internet or the installation will not succeed. If any error occurs, the error is displayed and you need to resolve it in order to continue; if everything is OK you will see a success page. Click finish to exit the installation window.

    Now, from the opening screen, select "Install SharePoint Server". This installs SharePoint together with the SQL Server 2008 Express edition. First, enter the product key for SharePoint and click continue. Accept the license agreement by selecting the checkbox and clicking continue, then select the installation type you want and click the "Standalone" button. In a production scenario you would select the server farm installation instead; this article only covers the standalone option, and installing a server farm is out of its scope. Once you click "Standalone", the installation starts and you can watch its progress. If any error occurs during installation, you will get the details and a link to the log file; check the log file, fix the corresponding issue, and start the installation again. If the installation completes without error, make sure the checkbox "Run the SharePoint Products Configuration Wizard now" is selected and click close.

    The SharePoint Products Configuration Wizard now starts. Click next; a warning appears, and clicking yes starts the configuration steps, whose progress you can follow step by step. Once they are complete, click finish. SharePoint installation is now finished: you can navigate to the SharePoint Central Administration website from Administrative Tools and start building your portal. Good luck!

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (the print statements are just placeholders)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen, the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names.  You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my Skydrive folder.  Any file saved in either location will instantly appear in the other.  And since my Skydrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a Skydrive link.  Although it doesn't actually use Skydrive, it performs the same function.  So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' Is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local Skydrive folder gave me similar functionality. 
To wrap this up, all you have to do is the following: Install Skydrive on both SQL Servers (principal and mirror) and set the local Skydrive folder (D:\SkyDrive in these examples) On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log) Viola! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in the Skydrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL with SQL features and transactions. As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but with the software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading e-commerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows you to:
    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible: use MySQL drivers, tools, and replication
    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Since Clustrix sounds like such a promising solution, I decided to dig a bit deeper to understand who its current customers are, as the company has existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:
    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest-growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
    Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the conversation turns to NewSQL, I know what I am talking about.
    Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Clustrix
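    Since the post stresses that ClustrixDB is nearly plug-and-play compatible with MySQL and uses MySQL drivers, connecting to it should look exactly like connecting to MySQL. Here is a minimal Java sketch, assuming MySQL Connector/J on the classpath; the cluster host, user, and password are hypothetical.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ClustrixHello {
            public static void main(String[] args) throws Exception {
                // Hypothetical cluster address; ClustrixDB speaks the MySQL wire
                // protocol, so the stock MySQL JDBC URL format is used.
                String url = "jdbc:mysql://clustrix.example.com:3306/test?user=app&password=secret";
                try (Connection conn = DriverManager.getConnection(url);
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT NOW()")) {
                    if (rs.next()) {
                        System.out.println("Connected; server time is " + rs.getString(1));
                    }
                }
            }
        }

    By the same compatibility claim, existing MySQL tooling pointed at the cluster address should work unchanged.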

    Read the article

  • I want to access a MySQL database table on given conditions in a drop-down menu [on hold]

    - by user3909877
    The code below accesses the database table directly, but I want it to display the table's contents only on the conditions given in the drop-down menus: when I select Islamabad in one drop-down menu and Lahore in the other (as given in the code) and press the search button, it should then display the flights table. Instead, it is displaying it straight away.

        <p class="h2">Quick Search</p>
        <div class="sb2_opts">
        <p> </p>
        <form method="post" action="">
          <p>Enter your source and destination.</p>
          <p>From:</p>
          <select name="from">
            <option value="Islamabad">Islamabad</option>
            <option value="Lahore">Lahore</option>
            <option value="murree">Murree</option>
            <option value="Muzaffarabad">Muzaffarabad</option>
          </select>
          <p>To:</p>
          <select name="To">
            <option value="Islamabad">Islamabad</option>
            <option value="Lahore">Lahore</option>
            <option value="murree">Murree</option>
            <option value="Muzaffarabad">Muzaffarabad</option>
          </select>
          <input type="submit" value="search" />
        </form>

        <?php
        $from = isset($_POST['from']) ? $_POST['from'] : '';
        $to   = isset($_POST['to'])   ? $_POST['to']   : '';
        if ($from == 'Islamabad') {
            if ($to == 'Lahore') {
                $db_host  = 'localhost';
                $db_user  = 'root';
                $database = 'homedb';
                $table    = 'flights';
                if (!mysql_connect($db_host, $db_user))
                    die("Can't connect to database");
                if (!mysql_select_db($database))
                    die("Can't select database");
                $result = mysql_query("SELECT * FROM {$table}");
                if (!$result) {
                    die("Query to show fields from table failed");
                }
                $result = mysql_query("SELECT * FROM {$table}");
                if (!$result) {
                    die("Query to show fields from table failed");
                }
                $fields_num = mysql_num_fields($result);
                echo "<h1>Table: {$table}</h1>";
                echo "<table border='1'><tr>";
                while ($row = mysql_fetch_row($result)) {
                    echo "<tr>";
                    // $row is an array... foreach( .. ) puts every element
                    // of $row into the $cell variable
                    foreach ($row as $cell)
                        echo "<td>$cell</td>";
                    echo "</tr>\n";
                }
            }
        }
        mysqli_close($con);
        ?>

    Read the article

  • Mongodb vs. Cassandra

    - by ming yeow
    I am evaluating what might be the best migration option. Currently I am on sharded MySQL (horizontal partitioning), with most of my data stored in JSON blobs. I do not have any complex SQL queries (I already migrated away from them after I partitioned my DB). Right now, it seems like both MongoDB and Cassandra would be likely options. My situation:
    - lots of reads in every query, less regular writes
    - not worried about "massive" scalability
    - more concerned about simple setup, maintenance and code
    - minimize hardware/server cost

    Read the article

  • Postgres user drop

    - by Grasper
    I am trying to drop a user:

        drop user testUser;

    I want to force this to work in a simple manner (not a million calls)... How can I do this easily? I get this output:

        ERROR:  role "testUser" cannot be dropped because some objects depend on it
        DETAIL:  access to table main.tap_db_version
        access to table main.user_instance
        access to table main.target_type
        access to table main.status_code
        access to table main.state_space_profile
        access to table main.service_subscription
        access to table main.service_instance
        access to table main.sa_ordnance_weapon_type
        access to table main.operation
        access to table main.mission_class
        access to table main.map_symbol
        access to table main.ada_weapon_type
        access to table main.active_process
        access to table main.acft_type_00_only
        access to table main.abp_create_params
        access to table main.exercise
        access to table main.decl
        access to table main.data_set
        access to table main.cancellation_notice
        access to table main.ato_family_tree
        access to table main.apportionment_cat_cd
        access to table main.abp
        access to table main.alert_settings
        access to table main.alert_log
        access to table main.airspace_usage_category
        access to schema main
        access to view testUser.top_priority
        access to view testUser.target_ssm_msn_count
        access to view testUser.target_air_msn_count
        access to view testUser.sortie_sum
        access to view testUser.ref_info
        access to view testUser.preview_rmk_count
        access to view testUser.preview_pgm_las_count
        access to view testUser.preview_pgm_desi_count
        access to view testUser.preview_objective_count
        access to view testUser.preview_gfriend_count
        access to view testUser.preview_escort_msn_req
        access to view testUser.preview_chaff_data
        access to view testUser.preview_airmove_seg
        access to view testUser.preview_aircraft_total
        access to view testUser.offload_total
        access to view testUser.objective_count
        access to view testUser.fuel_planned
        access to view testUser.ew_data
        access to view testUser.dual
        access to view testUser.current_base_inventory
        access to view testUser.cell_total
        access to view testUser.asgn_sortie_sum
        access to view testUser.appor_sorties_planned
        access to view testUser.airmove_seg
        access to view testUser.aircraft_total
        access to view testUser.abp
        access to table testUser.req_msn_task
        access to table testUser.req_task_source_req
        access to table testUser.req_ssm_msn
        access to table testUser.req_ssm_source
        access to table testUser.req_msn
        access to table testUser.req_msn_warnings
        access to table testUser.req_air_msn
        access to table testUser.req_src_header
        access to table testUser.req_msn_ids
        access to table testUser.req_msn_comment
        access to table testUser.req_c2_msn
        access to table testUser.req_c2_source
        access to table testUser.req_ada_msn
        access to table testUser.req_ada_vertex
        access to table testUser.weather_forecast
        access to table testUser.weather_coords
        access to table testUser.weather_area
        access to table testUser.weapon_option
        access to table testUser.wag_activity
        access to table testUser.unit_remark
        access to table testUser.unit_location_turn
        access to table testUser.unit_iff
        access to table testUser.unit_coordination
        access to table testUser.unit_code
        access to table testUser.trace_point
        access to table testUser.tasking_agency
        access to table testUser.task_unit
        access to table testUser.target_type
        access to table testUser.tap_db_version
        access to table testUser.status_code
        access to table testUser.state_space_threat
        access to table testUser.state_space_profile
        access to table testUser.state_space
        access to table testUser.ssm_mission
        access to table testUser.spins_section_id
        access to table testUser.spins_codes
        access to table testUser.spins
        access to table testUser.unit_location
        access to table testUser.ship_target_request
        access to table testUser.service_subscription
        access to table testUser.service_instance
        access to table testUser.sa_ordnance_weapon_type
        access to table testUser.runway
        access to table testUser.restricted_codes
        access to table testUser.response_entity
        access to table testUser.residual_mission
        access to table testUser.request_objective
        access to table testUser.request
        and 194 other objects (see server log for list)

    Read the article
