Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On the two types of servers I'm responsible for, SQL & Web, we have noticed major performance issues with the corporate standard setup:
    - Max scan time: 45 sec
    - One policy for all processes
    - Scan ALL files on write, read and open for backup
    - Heuristics: find unknown programs, trojans and macros
    - Detect unwanted programs
    - Exclude: EVT, LDF, LOG, MDF, VMD, Windows File Protection
    This of course still causes major slowdowns. IIS .NET recompiles are slow (especially with SharePoint), as are SQL backups and restores, SQL Analysis Services, Integration Services and the temp data they generate. I have looked from time to time for best practices on setting up McAfee for SQL Server, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of it is case by case, but there must be some best practices out there somewhere.

    Read the article

  • LISP: Keyword parameters, supplied-p

    - by echox
    At the moment I'm working through "Practical Common Lisp" by Peter Seibel. In the chapter "Practical: A Simple Database" (http://www.gigamonkeys.com/book/practical-a-simple-database.html) Seibel explains keyword parameters and the use of a supplied-p parameter with the following example:

        (defun foo (&key a (b 20) (c 30 c-p)) (list a b c c-p))

    Results:

        (foo :a 1 :b 2 :c 3) ==> (1 2 3 T)
        (foo :c 3 :b 2 :a 1) ==> (1 2 3 T)
        (foo :a 1 :c 3)      ==> (1 20 3 T)
        (foo)                ==> (NIL 20 30 NIL)

    So if I use &key at the beginning of my parameter list, each parameter can be given as a list of three elements: the name, a default value, and a variable that says whether the parameter has been supplied or not. Ok. But looking at the code in the above example, (list a b c c-p), how does the Lisp interpreter know that c-p is my "supplied" parameter?
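
    For illustration, a rough analogue of the same "was this argument supplied?" idea sketched in Python using a sentinel default value (only an analogy to the concept, not how Lisp implements it; the names are placeholders):

        _MISSING = object()  # sentinel: no caller will ever pass this exact object

        def foo(a=None, b=20, c=_MISSING):
            c_supplied = c is not _MISSING   # plays the role of c-p
            if not c_supplied:
                c = 30                       # fall back to the default
            return (a, b, c, c_supplied)

        print(foo(a=1, c=3))   # (1, 20, 3, True)
        print(foo())           # (None, 20, 30, False)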

    Read the article

  • TypeInitializationException When Getting an NHibernate Session

    - by Paul Johnson
    I've run into what appears to be an NHibernate config problem. Basically, I ran up a simple proof-of-concept persistence integration test using NUnit; the test simply queries an Oracle database and successfully returns the last record in the underlying table. However, when the assemblies are taken out of the NUnit test environment and deployed as they would be for an actual application build, my call for an NHibernate session results in a 'TypeInitializationException' whilst executing the line:

        sessionFactory = New Configuration().Configure().BuildSessionFactory()

    The application is a VB.NET console app running against an Oracle 9.2 database, using a 'coding framework' published on the web by Bill McCafferty entitled 'NHibernate Best Practices with ASP.NET' (pre S#harp Architecture). I am running version 2.1.2.4000 of NHibernate. Any assistance much appreciated. Kind Regards Paul J.

    Read the article

  • Flex+JPA/Hibernate+BlazeDS+MySQL how to debug this monster?!

    - by Zenzen
    Ok so I'm making a "simple" web app using the technologies from the topic. Recently I found http://www.adobe.com/devnet/flex/articles/flex_hibernate.html, so I'm following it and trying to apply it to my app, the only difference being that I'm working on a Mac and using MAMP for the database (so no command line for me). The thing is I'm having some trouble with retrieving from/connecting to the database. I have the remoting-config.xml, persistence.xml, a News.java class (my Entity), a NewsService.java class, a News.as class - all just like in the tutorial. I have of course this line in one of my .mxmls:

        <mx:RemoteObject id="loaderService" destination="newsService" result="handleLoadResult(event)" fault="handleFault(event)" showBusyCursor="true" />

    And my remoting-config.xml looks like this (well, part of it):

        <destination id="newsService">
            <properties><source>com.gamelist.news.NewsService</source></properties>
        </destination>

    NewsService has a method:

        public List<News> getLatestNews() {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT);
            EntityManager em = emf.createEntityManager();
            Query findLatestQuery = em.createNamedQuery("news.findLatest");
            List<News> news = findLatestQuery.getResultList();
            return news;
        }

    And the named query is in the News class:

        @Entity
        @Table(name="GLT_NEWS")
        @NamedQueries({
            @NamedQuery(name="news.findLatest", query="from GLT_NEWS order by new_date_added limit 5 ")
        })

    The handleLoadResult looks like this:

        private function handleLoadResult(ev:ResultEvent):void {
            newsList = ev.result as ArrayCollection;
            newsRecords = newsList.length;
        }

    Where:

        [Bindable]
        private var newsList:ArrayCollection = new ArrayCollection();

    But when I try to trigger loaderService.getLatestNews(), nothing happens; newsList is empty. A few things I need to point out: 1) as I said, I didn't install MySQL manually but am using MAMP (yes, the server's running) - could this cause some trouble? 2) I already have a "gladm" database and I have a "GLT_NEWS" table with all the fields - is this bad? Basically the question is: how am I supposed to debug this thing so I can find the mistake I'm making? I know that loadData() is executed (did a trace()), but I have no idea what happens with loaderService.getLatestNews()...
    @EDIT: ok so I see I'm getting an error in the "fault handler" which says "Error: Client.Error.MessageSend - Channel.Connect.Failed error NetConnection.Call.Failed: HTTP: Status 404: url: 'http://localhost:8080/WebContent/messagebroker/amf'"

    Read the article

  • Handling update errors in multiple records in the TClientDataset's ReconcileError method

    - by Fabio Gomes
    I'm trying to use the ReconcileError event to allow the user to correct the data after an update error which occurred in a specific record among others. Example: I have a dataset with one field and 3 records; this field has a unique constraint on the database. I change one value so that it conflicts when it reaches the database, then I call ApplyUpdates on the dataset. This generates an error (violation of unique constraint) in the provider and aborts the ApplyUpdates process, returning raAbort in the Action var of the ReconcileError method. In the ReconcileError method I tried to use:

        Action := HandleReconcileError(aDataSet, UpdateKind, E);

    ** EDIT ** After debugging and dumping the dataset records which were returned from the server, I noticed that there are 2 records in this dataset: the first is the old record, and the second has all the changes I made to the first record. I'm a bit confused - will I always get this dataset with 2 records? I thought that it should have only one record with the Old/New values. Thanks.

    Read the article

  • WPF - How do I get an object that is bound to a ListBoxItem back

    - by JonBlumfeld
    Hi there, here is what I would like to do. I get a list of objects from a database and bind this list to a ListBox control. The ListBoxItems consist of a TextBlock and a Button. Here is what I came up with; up to this point it works as intended. The object has a number of properties like ID and Name. If I click on the button in the ListBoxItem, the item should be erased from the ListBox and also from the database...

        <ListBox x:Name="taglistBox">
            <ListBox.ItemContainerStyle>
                <Style TargetType="{x:Type ListBoxItem}">
                    <Setter Property="HorizontalAlignment" Value="Stretch"/>
                    <Setter Property="Template">
                        <Setter.Value>
                            <ControlTemplate TargetType="ListBoxItem">
                                <ContentPresenter HorizontalAlignment="Stretch"/>
                            </ControlTemplate>
                        </Setter.Value>
                    </Setter>
                    <Setter Property="Tag" Value="{Binding TagSelf}"></Setter>
                </Style>
            </ListBox.ItemContainerStyle>
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <Grid HorizontalAlignment="Stretch">
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="Auto"/>
                            <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Button Grid.Column="0" Name="btTag" VerticalAlignment="Center" Click="btTag_Click" HorizontalAlignment="Left">
                            <Image Width="16" Height="16" Source="/WpfApplication1;component/Resources/104.png"/>
                        </Button>
                        <TextBlock Name="tbtagBoxTagItem" Margin="5" Grid.Column="1" Text="{Binding Name}" VerticalAlignment="Center" />
                    </Grid>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    The TextBlock.Text is bound to object.Name and the ListBoxItem.Tag to object.TagSelf (which is just a copy of the object itself). Now my questions: if I click the button in the ListBoxItem, how do I get the ListBoxItem and the object bound to it back? In order to delete the object from the database I have to retrieve it somehow. I tried something like:

        ListBoxItem lbi1 = (ListBoxItem)(taglistBox.ItemContainerGenerator.ContainerFromItem(taglistBox.Items.CurrentItem));
        ObjectInQuestion t = (ObjectInQuestion) lbi1.Tag;

    Also, is there a way to automatically update the contents of the ListBox if the ItemsSource changes? Right now I'm achieving that by:

        taglistBox.ItemsSource = null;
        taglistBox.ItemsSource = ObjectInQuestion;

    I'd appreciate any help you can give :D Thanks in advance

    Read the article

  • Grails/Groovy taglib: parsing dynamically inserted tags

    - by Dan Guy
    Is there a way to have a custom taglib operate on data loaded in a .gsp file, such that it picks up any tags embedded in the data stored in the database? For instance, let's say I'm doing:

        <g:each in="${activities}">
            <li>${it.payload}</li>
        </g:each>

    And inside the payload, which is coming from the database, is text like "Person a did event <company:event id="15124124">Event Description</company:event>". Can you have a taglib that handles company:event tags on the fly?

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is because our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is building a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and if I'm really in a position to bother with this kind of thing at all.

    Read the article

  • SharePoint Foundation 2010 single-machine development installation problems

    - by Robert Koritnik
    I'm having problems installing a development machine for SharePoint (Foundation) 2010. This is what I did so far, all on the same machine:
    1. Installed a clean Windows 7 x64 with 4 GB of RAM, without being part of any domain - just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the two IIS 6 related ones.
    3. Installed SQL Server 2008 R2 Development Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010 by first extracting the files, changing the config to enable installation on Windows 7, and then installing it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I am able to use a non-domain username (the SPF_CONFIG user created in the previous step).
    But all I get is this: The outcome after this error is:
    - Database Sharepoint2010Config is created
    - User SPF_CONFIG is added to SQL Server and attached to this newly created database as dbowner, and checking SQL Server security logins this user has the following rights: dbcreator, securityadmin, public

    Read the article

  • Hierarchical data in Linq - options and performance

    - by Anthony
    I have some hierarchical data - each entry has an id and a (nullable) parent entry id. I want to retrieve all entries in the tree under a given entry. This is in a SQL Server 2005 database. I am querying it with LINQ to SQL in C# 3.5. LINQ to SQL does not support Common Table Expressions directly. My choices are to assemble the data in code with several LINQ queries, or to make a view on the database that surfaces a CTE. Which option (or another option) do you think will perform better when data volumes get large? Is SQL Server 2008's HierarchyId type supported in Linq to SQL?
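
    For illustration, a rough language-agnostic sketch (written in Python here) of the "assemble the data in code" option, assuming all (id, parent_id) pairs have already been fetched; the function and column names are placeholders:

        from collections import defaultdict

        def descendants(rows, root_id):
            """rows: iterable of (id, parent_id) pairs loaded from the table."""
            children = defaultdict(list)
            for node_id, parent_id in rows:
                if parent_id is not None:
                    children[parent_id].append(node_id)

            result, frontier = [], [root_id]
            while frontier:                        # walk the tree level by level
                next_frontier = []
                for node in frontier:
                    for child in children[node]:
                        result.append(child)
                        next_frontier.append(child)
                frontier = next_frontier
            return result

        # example: 2 and 3 are children of 1, 4 is a child of 3
        print(descendants([(1, None), (2, 1), (3, 1), (4, 3)], 1))   # [2, 3, 4]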

    Read the article

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:
    - Indexes: no
    - fsync: false
    - logging: disabled
    Other information:
    - Tablespace: RAM
    - Number of columns in one row: 25 (mostly integers)
    - CPU: 4 core, 2.5 GHz
    - RAM: 48 GB
    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
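
    For illustration, one common way to cut per-statement overhead is to stream the whole batch with COPY instead of issuing individual INSERTs. A minimal psycopg2 sketch, assuming a PostgreSQL client; the connection string, table and column names are placeholders:

        import io
        import psycopg2

        conn = psycopg2.connect("dbname=test user=postgres")   # placeholder connection string
        cur = conn.cursor()

        rows = [(i, i * 2, i * 3) for i in range(100_000)]      # stand-in for the real 25-column records
        buf = io.StringIO()
        for r in rows:
            buf.write("\t".join(str(v) for v in r) + "\n")
        buf.seek(0)

        # one COPY round trip for the whole batch, instead of 100,000 INSERT statements
        cur.copy_from(buf, "samples", columns=("a", "b", "c"))
        conn.commit()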

    Read the article

  • SQL Server 2008 Management Studio doesn't recognize new Schema

    - by Lieven Cardoen
    I have created a new schema called Contexts in a database. Now when I want to write a query, Management Studio doesn't recognize the tables that belong to the new schema. It says 'Invalid object name Contexts.ContextLibraries'... Transact-SQL:

        INSERT INTO [Contexts].[ContextLibraries] (ChannelId, [IsSystem]) VALUES (@ChannelId, 1)

    When I try the same thing on my local database, it does work... Any ideas? I did try to change the default schema for the user from dbo to Contexts, but this doesn't work. I also checked Contexts under "Schemas owned by this user", without success. Update: apparently the SQL query does work, but the editor flags the object name as invalid.

    Read the article

  • Fluent NHibernate - Unable to parse integer as enum.

    - by Aaron Smith
    I have a column mapped to an enum with a convention set up to map this as an integer to the database. When I run the code to pull the data from the database I get the error "Can't Parse 4 as Status".

        public class Provider : Entity<Provider>
        {
            public virtual Enums.ProviderStatus Status { get; set; }
        }

        public class ProviderMap : ClassMap<Provider>
        {
            public ProviderMap()
            {
                Map(x => x.Status);
            }
        }

        class EnumConvention : IUserTypeConvention
        {
            public void Accept(IAcceptanceCriteria<IPropertyInspector> criteria)
            {
                criteria.Expect(x => x.Property.PropertyType.IsEnum);
            }

            public void Apply(IPropertyInstance instance)
            {
                instance.CustomType(instance.Property.PropertyType);
            }
        }

    Any idea what I'm doing wrong?

    Read the article

  • In Drupal 6, is there a way to take a custom field from the latest post to a taxonomy term, and display it?

    - by user278457
    The title for this question pretty much sums up what I'm asking. I've got a list of taxonomy terms, and I'm using a view to display the latest post to each one. I'd like to also display a custom field, set up in CCK, just under this. Currently I'm just using the "date updated" of the taxonomy term itself, which was easy to set up in Views. I'd like to drill a little deeper and get the custom "event date" field I've added to the content type that was last posted to the taxonomy term I'm "viewing". I've got a feeling I'm going to have to write my own database query for this.

        If (I can avoid that) {
            How do I set up such a view?
        } Else {
            What's the best practice for including lower-level database queries alongside views?
        }

    Read the article

  • Recording user data for heatmap with javascript

    - by Hanpan
    Hi, I was wondering how sites such as crazyegg.com store user click data during a session. Obviously there is some underlying script which records each click's data, but how is that data then populated into a database? It seems to me the simple solution would be to send the data via AJAX, but when you consider that it's almost impossible to get a cross-browser page unload function set up, I'm wondering if there is perhaps some other, more advanced way of getting metric data. I even saw a site which records each mouse movement, and I am guessing they are definitely not sending that data to a database on each mouse move event. So, in a nutshell, what kind of technology would I need in order to monitor user activity on my site and then store this information in order to create metric data? I am not looking to recreate GA, I'm just very interested to know how this sort of thing is done. Thanks in advance
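
    For illustration, a rough sketch of the storage half of such a setup, assuming the page batches click events in the browser and periodically POSTs them as JSON; the /collect endpoint, field names and choice of Flask/SQLite are placeholders, not how any particular site does it:

        import sqlite3
        from flask import Flask, request

        app = Flask(__name__)

        def get_db():
            conn = sqlite3.connect("clicks.db")
            conn.execute(
                "CREATE TABLE IF NOT EXISTS clicks (session_id TEXT, x INT, y INT, ts REAL)"
            )
            return conn

        @app.route("/collect", methods=["POST"])
        def collect():
            events = request.get_json(force=True) or []     # a small batch of click events
            conn = get_db()
            conn.executemany(
                "INSERT INTO clicks (session_id, x, y, ts) VALUES (?, ?, ?, ?)",
                [(e["session"], e["x"], e["y"], e["t"]) for e in events],
            )
            conn.commit()
            conn.close()
            return "", 204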

    Read the article

  • How to customize pickle for django model objects

    - by muudscope
    I need to pickle a complex object that refers to django model objects. The standard pickling process stores a denormalized object in the pickle. So if the object changes on the database between pickling and unpickling, the model is now out of date. (I know this is true with in-memory objects too, but the pickling is a convenient time to address it.) So what I'd like is a way to not pickle the full django model object. Instead just store its class and id, and re-fetch the contents from the database on load. Can I specify a custom pickle method for this class? I'm happy to write a wrapper class around the django model to handle the lazy fetching from db, if there's a way to do the pickling.
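
    For illustration, a minimal sketch of one way to do this with __reduce__, wrapping the model instance so the pickle stores only the class and primary key; the LazyModelRef and _load_instance names are placeholders:

        def _load_instance(model_cls, pk):
            # called at unpickle time: re-fetch the current row from the database
            return model_cls.objects.get(pk=pk)

        class LazyModelRef:
            """Wrap a Django model instance so pickling keeps only (class, pk)."""

            def __init__(self, instance):
                self.model_cls = type(instance)
                self.pk = instance.pk

            def __reduce__(self):
                # tell pickle: to rebuild, call _load_instance(model_cls, pk),
                # so the unpickled object is a freshly fetched model instance
                return (_load_instance, (self.model_cls, self.pk))

    Unpickling then yields whatever is currently in the database for that id, rather than the snapshot taken at pickle time.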

    Read the article

  • Connecting to external MySQL DB from a web server not running MySQL

    - by jrb04c
    While I've been working with MySQL for years, this is the first time I've run across this very newbie-esque issue. Due to a client demand, I must host their website files (PHP) on an IIS server that is not running MySQL (instead, they are running MSSQL). However, I have developed the site using a MySQL database which is located on an external host (Rackspace Cloud). Obviously, my mysql_connect function is now bombing because MySQL is not running on localhost. Question: is it even possible to hit an external MySQL database if localhost is not running MySQL? Apologies for the rookie question, and many thanks in advance.

    Read the article

  • SQLite character encoding for Google Gears

    - by MHD
    We're using jQuery to get a JSON-string from our server (UTF-8 response, also UTF-8 request through jQuery) and put this JSON into a Google Gears WorkerPool. This workerpool processes the JSON and stores it into a Gears database (SQLite). It turns out that, apparently, SQLite stores data using iso-8859-1 rather than UTF-8. Since we're trying to store user names that might contain Cyrillic characters (and others that you might encounter in Europe), this goes horribly wrong. Can anyone tell me how to change the character encoding in either the Gears WorkerPool or the SQLite database that Gears employs? Of course, if I'm looking in the wrong direction with my problem, feel free to offer alternatives! Unfortunately, HTML5 isn't an option as we're supposed to support IE7 primarily.

    Read the article

  • AJAX call in a continuous loop?

    - by Mestika
    Hi, I want to create some kind of AJAX script or call that will continuously check a MySQL database to see whether any new messages have arrived. When there is a new message in the database, the AJAX script should invoke some kind of alert or message box. I'm not quite an AJAX expert (yet, anyway) and have Googled around to find a solution, but I'm having a hard time figuring out where to begin. I imagine it is much the same method that an AJAX chat uses to see whether any new chat message has been sent. I've also tried searching for making an AJAX (XMLHttpRequest) call in a continuous, infinite loop, but still haven't got a solution. I hope there is someone who can help me with such an AJAX script, or maybe nudge me in the right direction. Thanks Sincerely Mestika

    Read the article

  • Seeking tutorial: introduction to ODBC with Delphi

    - by mawg
    I have a lot of embedded C/C++/Ada experience and an outdated smattering of Delphi plus some database stuff. Now I have to implement an app in Delphi which can manipulate MySQL, Oracle, maybe MS Access. In short, I need ODBC. I need to programmatically create a database, define its structure and populate its contents, then later query its existence and programmatically search it. I would prefer not to use 3rd-party components unless there is a compelling reason to do so (performance ought not to be an issue for the app; it won't have much data or be run often, at least not in v1.0). Can anyone point me at a tutorial which can get me up to speed? Thanks

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

        EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly, and has both RPC and RPC OUT set to true. Security on the linked server is set to "Be made using the login's current security context". When I attempt to execute the stored procedure, I get the following error:

        Msg 18483, Level 14, State 1, Line 1
        Could not connect to server 'RemoteServer' because '' is not defined as a remote login at the server.
        Verify that you have specified the correct login name.

    I am connected to the local server using Windows Authentication. Anyone know why I would be getting this error?

    Read the article

  • Invoking active_record error - cannot load file in Ruby on Rails

    - by user1623624
    When I try to run rails generate scaffold test, the following error always shows:

        C:\Lab\railapps\dbtest>rails generate scaffold test
        invoke active_record
        C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in `require': Please install the oracle_enhanced_adapter: `gem install activerecord-oracle_enhanced-adapter` (cannot load such file -- active_record/connection_adapters/oracle_enhanced_adapter) (LoadError)
        from C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in `block in require'

    I did install the oci8 gem and then activerecord-oracle_enhanced-adapter. Can you help me by having a look? Thanks a lot. Version information:

        C:\Lab\railapps\dbtest>gem list ruby-oci8
        *** LOCAL GEMS ***
        ruby-oci8 (2.1.2 ruby x86-mingw32, 2.0.6)

        C:\Lab\railapps\dbtest>gem list activerecord-oracle_enhanced-adapter
        *** LOCAL GEMS ***
        activerecord-oracle_enhanced-adapter (1.4.1)

    database.yml under config:

        development:
          adapter: oracle_enhanced
          database: cvrman.cablevision.com
          username: ruby
          password: ruby

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

        The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    However I can't get any further. A few relevant facts:
    - I get the error even if all the VMs are shut down
    - If I disable the Hyper-V VSS Writer by stopping the Hyper-V Management Service, backup completes OK
    - There are no errors in the Hyper-V-VMMS application log
    I tried to set tracing for VSS but can't get any output for some reason. I set the correct registry entries but no trace log is generated. Tim

    Read the article
