Search Results

Search found 37600 results on 1504 pages for 'database engine'.


  • Which knowledge base / rule-based inference engine to choose for real-time runway incursion prevention

    - by Piligrim
    Hello, we are designing a project that would listen to the dialogue between airport controllers and pilots to prevent runway incursions (e.g. one airplane taking off while another is crossing the runway). Our professor wants us to use Jena for the knowledge base (or anything else, but it should be some sort of rule-based engine). Inference is not the main focus of Jena, and there is not much documentation or many examples for this. So we need an engine that takes messages from pilots as input and outputs possible incursion risks, or any other error in the message protocol. It should be easy to write rules and easy to feed the engine real-time data. I imagine it something like this:
    1. A pilot sends a message that he is landing on some runway; the system remembers that the runway is busy and no one should cross it.
    2. If someone is then given an instruction to cross this runway, the engine should fire a rule that something is wrong.
    3. When the pilot sends a message that he has left the runway and is taxiing to the gate, the system clears the runway and lets other planes use it.
    So is Jena, or Prolog, or any other rules engine suitable for this? I mean, it is suitable, but do we really need to use it? I asked the professor if we could just keep the state of each runway and use some simple checks based on the messages we receive, and he said that this is not scalable and we need the knowledge base. Can someone give me any advice on which approach to use for this system? If you recommend a knowledge base, which one should we use? The project is written in Java. Thank you.
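    For comparison, here is a minimal sketch of that "simple checks" approach in plain Java (no Jena, no rules engine; the class and method names are illustrative only). In Jena or a production rules engine such as Drools, each of these handlers would become a declarative rule over the same runway state:

        import java.util.HashSet;
        import java.util.Set;

        // Minimal sketch of the stateful check described above, in plain Java rather
        // than Jena or a rules engine. Message parsing is assumed to happen elsewhere.
        public class RunwayMonitor {

            private final Set<String> occupiedRunways = new HashSet<String>();

            public void onLandingOrTakeoffClearance(String runway) {
                occupiedRunways.add(runway);           // runway is now busy
            }

            public void onCrossingInstruction(String runway) {
                if (occupiedRunways.contains(runway)) {
                    // a rules engine would fire a rule here instead
                    System.err.println("Possible incursion: crossing clearance on occupied runway " + runway);
                }
            }

            public void onRunwayVacated(String runway) {
                occupiedRunways.remove(runway);        // runway is free again
            }
        }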

    Read the article

  • ASP.NET MVC View Engine Resolution Sequence

    - by intangible02
    I created a simple ASP.NET MVC version 1.0 application. I have a ProductController which has one action, Index. In the views, I created a corresponding Index.aspx under the Product subfolder. Then I referenced the Spark DLL and created Index.spark under the same Product view folder. Application_Start looks like this:

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new Spark.Web.Mvc.SparkViewFactory());
            ViewEngines.Engines.Add(new WebFormViewEngine());
        }

    My expectation is that since the Spark engine is registered before the default WebFormViewEngine, the Spark engine should be used when browsing the Index action of the Product controller, and WebFormViewEngine should be used for all other URLs. However, testing shows that the Index action of the Product controller also uses WebFormViewEngine. If I comment out the registration of WebFormViewEngine (the last line in the code), the Index action is rendered by the Spark engine and the remaining URLs generate an error (since the default engine is gone), which proves that all my Spark code is correct. Now my question is: how is the view engine resolved? Why does the registration order not take effect?

    Read the article

  • Google App Engine for static files in Joomla

    - by vipinsahu
    Hi, I want to use Google App Engine to serve the static data for my Joomla website. Currently my site is http://webkul.com, and I want to put all the CSS and JS files on App Engine. Please help; I am using Google App Engine for the first time. If there is a good tutorial, please post it here. Thanks.
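    For reference, serving static assets from the classic Python runtime on App Engine is mostly an app.yaml exercise; a minimal sketch (the application id and directory names are placeholders) might look like the following, with the Joomla templates then pointing at the appspot.com or custom-domain URLs:

        application: my-static-assets   # placeholder app id
        version: 1
        runtime: python                 # classic Python runtime layout
        api_version: 1

        handlers:
        - url: /css
          static_dir: css               # serve everything under ./css as-is
        - url: /js
          static_dir: js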

    Read the article

  • Does Google App Engine support FTP?

    - by Frank
    Right now I use my own Java FTP program to FTP objects from my PC to my ISP's web server. I want to use a Google App Engine servlet to receive PayPal IPN messages, store the messages in my own objects, and then FTP the objects to my ISP's web server. Is this doable? I have heard that Google App Engine doesn't support FTP. I don't expect Google to do it for me, but can I use my own Java FTP code inside the web app that I upload to App Engine to do it? Frank

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
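    For what it's worth, the access mode SQL Server currently records for a database is visible in the sys.databases catalog view, so a quick check along these lines (a sketch, with the database name substituted) shows what state the server thinks the database should be in, and returns it to multi-user if needed:

        -- Check the recorded access mode for the database
        SELECT name, user_access_desc
        FROM sys.databases
        WHERE name = 'myDb';

        -- Return it to multi-user mode if required
        ALTER DATABASE [myDb] SET MULTI_USER WITH ROLLBACK IMMEDIATE;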

    Read the article

  • Installing Sphinx on App Engine - possible?

    - by Pekka
    Following up on my question about documentation from last year, I now want to get started and try out the Python-based Sphinx for putting together the developer documentation for a PHP CMS I've been working on. Instead of setting up Python locally on my workstation, I would like to run it on a publicly accessible web server from the start. All the web hosting packages I have access to run on the LAMP stack, and I'm reluctant to buy Python-based hosting. I am very interested in Google App Engine; the free quotas they provide will do for me a hundred times over, and even if not, their pricing looks very reasonable. Now, I have zero knowledge of Python - getting Sphinx to work would be my first contact with it - and very little time. As far as I understand, the platform and Python libraries App Engine provides are very close to a standard Python environment but not identical. So my questions are: Can Sphinx run on App Engine at all? Is installing Sphinx on App Engine as straightforward as installing it on top of a normal Python installation? Or will App Engine's environment require tweaking of the source code that I can't perform in reasonable time with my current level of Python? Should I instead install Sphinx on a local server with a "normal" Python stack first? Does anybody know any helpful how-tos, tutorials or other resources for this?

    Read the article

  • Meta Search Engine Architecture

    - by Loki
    The original question wasn't clear enough, I think; here's an updated, straight-to-the-point version: what are the common architectures used in building a meta search engine, and are there any libraries available for building that type of search engine? I'm looking at building an "enterprise" type of search engine where the indexed data could come from proprietary engines (like Autonomy or a Google Box) or public search engines (like Google Web or Yahoo Web).
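    The common shape is a fan-out and merge: send the query to every backend in parallel, normalise each response into a common result format, then deduplicate and re-rank. A rough Python sketch of that skeleton, with entirely hypothetical backend functions standing in for the Autonomy / Google Box / public web API connectors:

        from concurrent.futures import ThreadPoolExecutor

        def search_backend_a(query):
            # placeholder: call a proprietary engine and map its response to a
            # list of {"url": ..., "title": ..., "score": ...} dicts
            return []

        def search_backend_b(query):
            # placeholder: call a public web search API, mapped to the same shape
            return []

        BACKENDS = [search_backend_a, search_backend_b]

        def meta_search(query):
            # fan out to every backend in parallel
            with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
                result_lists = list(pool.map(lambda fn: fn(query), BACKENDS))
            # merge: deduplicate by URL, keep the best score per URL
            merged = {}
            for results in result_lists:
                for hit in results:
                    if hit["url"] not in merged or hit["score"] > merged[hit["url"]]["score"]:
                        merged[hit["url"]] = hit
            return sorted(merged.values(), key=lambda h: h["score"], reverse=True)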

    Read the article

  • Is there any open source AI engine?

    - by Andrei Savu
    I am searching for an open source AI engine implemented in C/C++, ActionScript or Java, so far with no success. Do you know of any open source implementation? Update: thanks for the answers! I had no idea how vast the AI field is. I am working on a sample application: I want to add intelligent behaviour on top of a physics engine, so I need some sort of AI engine designed for games.

    Read the article

  • Problems with i18n using django translation on App-Engine with Korean and Hindi

    - by Greg
    I've got a setup based on the post here, and it works perfectly. Adding more languages to the mix, it recognises them fine, except for Korean (ko) and Hindi (hi). Chinese, Japanese and Hebrew are all fine, so I don't think it has anything to do with encodings or charsets. Taking a look at the Django code inside the App Engine SDK, I notice that all the languages I'm using except ko and hi are ones that ship with Django: ko and hi are missing from the default settings.py and from the locale folder. If I copy one of the locale folders inside /usr/local/google_appengine/lib/django[...]/conf/locale and rename it to 'ko', then it starts working in my app, but I won't be able to replicate this modification when I deploy to App Engine, so I need a bit of help understanding what I might be doing wrong. My settings.py is definitely being taken into account: if I remove languages from there, they stop working (as they should). If I copied the Django modules into my app, under 'lib' there say, could I use those instead of the ones App Engine tries to use, maybe? I'm brand new to Python/Django/App Engine, and developing on a Mac with Leopard, if that makes any difference. I have the latest App Engine SDK as of Tuesday.
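    One thing worth checking, offered as an assumption rather than a verified fix: the Python SDK lets an app request a newer bundled Django than the ancient default via use_library, and newer Django releases ship more locale folders. If the missing ko/hi locales are the whole problem, something like this at the top of the handler script, before anything imports Django, might avoid patching the SDK by hand:

        # Must run before anything imports Django.
        from google.appengine.dist import use_library
        use_library('django', '1.2')   # assumes the SDK bundles Django 1.2 and that it ships the ko/hi locales

        import django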

    Read the article

  • Text to a PNG on App Engine (Python)

    - by Bemmu
    Note: I am cross-posting this from the App Engine group because I got no answers there. As part of my site about Japan, I have a feature where the user can get a large PNG, for use as a desktop background, that shows the user's name in Japanese. After switching my site hosting entirely to App Engine, I removed this particular feature because I could not find any way to render text to a PNG using the Images API. In other words, how would you go about outputting a Unicode string on top of an image of known dimensions (1024x768, for example), so that the text is as large as possible horizontally and centered vertically? Is there a way to do this in App Engine, or is there some external service besides App Engine that could make this easier for me, that you could recommend (besides running ImageMagick on your own server)?
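    For the record, the App Engine Images API of that era only exposes transforms such as resize, crop, rotate and composite, with no text drawing, so the rendering has to happen off App Engine. On any host where PIL is available the recipe is short; a sketch, with the font path and the sizing loop as assumptions (textsize is the PIL-era call; newer Pillow uses textbbox):

        from PIL import Image, ImageDraw, ImageFont

        def render_name(text, width=1024, height=768, font_path="NotoSansJP-Regular.otf"):
            img = Image.new("RGB", (width, height), "white")
            draw = ImageDraw.Draw(img)

            # grow the font until the text nearly fills the width
            size = 10
            while True:
                font = ImageFont.truetype(font_path, size)
                w, h = draw.textsize(text, font=font)
                if w >= width * 0.95 or size > 500:
                    break
                size += 2

            # centre the text and write the PNG
            draw.text(((width - w) / 2, (height - h) / 2), text, font=font, fill="black")
            img.save("background.png", "PNG")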

    Read the article

  • Is the 4GB limitation of these embedded/express DBs good enough? What's next if the limitation is reached?

    - by edwin.nathaniel
    I'm wondering how long a (theoretical) desktop app could run before it consumes the full 4GB limit of these express/embedded database products (SQL Server Express, Oracle Express, SQLite3, etc.), provided that big blobs are stored in the filesystem. Also, what would your strategy be when it hits 4GB?
    - Archive the old DB.
    - Copy the last 1-3 months of data to the new DB (consider this a cache strategy?).
    - Start using the new DB from this point onward. (How do you access the old data?)
    I understand that the answer may vary depending on how much data you store in each table/column, but please describe it based on your experience (what kind of desktop app, whether it is write- or read-heavy, and how long you guess it would take to reach the limit).

    Read the article

  • Using an observer within an Engine

    - by Tim
    I've created an Engine which is basically used for all of our projects. Now what I want to do is add a before_create callback to all of the models in this Engine. After some searching I found out that an observer is the way to go. So, I've created this observer:

        class AuthObserver < ActiveRecord::Observer
          def before_create(record)
            p record
          end
        end

    Now I need to add it to the application, but of course in my Engine there is no such file as application.rb. What I tried is adding it to an initializer located in /config/initializers/observers.rb, like so:

        Rails.application.config.active_record.observers = :auth_observer

    But this doesn't work, and it throws no errors. Does anybody out there have experience using an observer inside an engine? Thanks a lot!

    Read the article

  • Open Source PHP search engine

    - by Ravi Gupta
    I am looking for an open source search engine plugin written in PHP for my website (eCommerce). Before anybody answers, I have a doubt regarding the search engine. Usually a search engine crawls web pages, creates indexes and then uses them when looking for content. But will the same model work for eCommerce websites? Yes, it can crawl product pages and index them, but don't you think it would be better if it crawled the database directly and indexed the products stored in the database? Then, when a user searches for a product, it would simply return the rows of the table that match the user's query. Maybe what I am asking is a stupid question, but I am new to web development, so kindly help me understand the concept. I have looked at a search engine called Sphider, but I didn't understand what I have to do to make it work with an eCommerce website.
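    On the database-indexing idea specifically: with MySQL you may not need a crawler at all, because a FULLTEXT index on the product table gives ranked keyword search directly in SQL. A rough example, with made-up table and column names:

        -- one-off: add a full-text index to the product catalogue
        -- (MyISAM tables, or InnoDB on MySQL 5.6+)
        ALTER TABLE products ADD FULLTEXT INDEX ft_products (name, description);

        -- at query time: rows ranked by relevance against the user's search terms
        SELECT id, name, price,
               MATCH(name, description) AGAINST ('waterproof jacket') AS relevance
        FROM products
        WHERE MATCH(name, description) AGAINST ('waterproof jacket')
        ORDER BY relevance DESC
        LIMIT 20;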

    Read the article

  • "Cannot perform a differential backup for database "myDb", because a current database backup does no

    - by krimerd
    Hi there, I have what seems to be a pretty common problem when trying to take a differential backup. We have SQL Server 2008 Standard (64-bit) and we use LiteSpeed v5.0.2.0 to take our backups. We take full backups once a week and a differential on a daily basis. The problem is, every time I try to take a diff backup I get the following error: "VDI open failed due to requested abort. BACKUP DATABASE is terminating abnormally. Cannot perform a differential backup for database "myDb", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option." The problem is that I know 100% I have a full backup, because I just double-checked. Only once was I able to take a diff backup, and that was when I took it immediately after a full backup. I have searched around and noticed that this is pretty common (although mostly with SQL 2005), and a solution that a lot of people suggest, which I haven't tried yet, is to disable the SQL Server VSS Writer service. The problem with this is that (1) I think I might need this service since I am using third-party backup software, and (2) I am not sure exactly what the service does and don't want to disable it just like that. Have any of you ever experienced this problem, and how did you go about fixing it? Thank you,
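    Two hedged checks that can narrow this down: ask msdb whether SQL Server itself believes a recent full backup exists (the differential base the error refers to), and try a plain native full-plus-differential pair to see whether the problem sits on the LiteSpeed/VDI side:

        -- When was the last full (type 'D') backup SQL Server knows about?
        SELECT TOP 5 backup_finish_date, type, is_copy_only
        FROM msdb.dbo.backupset
        WHERE database_name = 'myDb'
        ORDER BY backup_finish_date DESC;

        -- Native full + differential pair, bypassing LiteSpeed, as a sanity check
        BACKUP DATABASE [myDb] TO DISK = N'D:\Backups\myDb_full.bak' WITH INIT;
        BACKUP DATABASE [myDb] TO DISK = N'D:\Backups\myDb_diff.bak' WITH DIFFERENTIAL, INIT;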

    Read the article

  • Database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario: step 1, the user signs up and the user record is inserted into the database; step 2, the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, the user record has not yet been replicated to the slave, and an error is returned saying the user does not exist. This is a pretty trivial example, but I can see how it applies to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
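    The usual mitigation is read-your-own-writes routing: for a short window after a user's write (or at least for the login immediately after signup), send that user's reads to the master instead of the slave. A rough sketch of the idea, with the connection helpers left as placeholders:

        import time

        REPLICATION_GRACE_SECONDS = 5  # assumption: an upper bound on typical slave lag

        def get_connection(user_session):
            # route to the master for a short window after this session performed a write
            if time.time() - user_session.get("last_write_at", 0) < REPLICATION_GRACE_SECONDS:
                return connect_to_master()   # placeholder helper
            return connect_to_slave()        # placeholder helper

        def sign_up(user_session, user_row):
            conn = connect_to_master()
            insert_user(conn, user_row)      # placeholder helper
            user_session["last_write_at"] = time.time()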

    Read the article

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update: I found that doing a manual detach/reattach using the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)" got around this issue. I'll just be creating a script to detach and reattach, and do the file copies manually. Any info on how to overcome the problems with the wizard would still be helpful in the future. I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot be copied if they have views into other databases that have not yet been copied to the target system. The generated log file says the copy "failed with the following error: "Invalid object name"", referring to the database referenced in the view. If I first copy just the database referenced in the view, and then in a separate step copy over the database containing the view, it succeeds. However, some databases have views into each other, so I can't just adjust the order in which the copies occur. Is there any way to ignore this error and just allow everything to copy?
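    For anyone scripting the detach/attach route mentioned in the update, the core T-SQL is short (paths and the database name are examples; the data and log files still have to be copied between servers separately):

        -- On the source server
        USE master;
        ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        EXEC sp_detach_db @dbname = N'MyDb';

        -- copy MyDb.mdf / MyDb_log.ldf to the target server, then on the target:
        CREATE DATABASE [MyDb]
        ON (FILENAME = N'D:\Data\MyDb.mdf'),
           (FILENAME = N'D:\Data\MyDb_log.ldf')
        FOR ATTACH;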

    Read the article

  • I cannot connect to the database from Drupal

    - by Patrick
    Hi, I've uploaded my Drupal website (and the related database) to my new server. The database info is:

        host: localhost
        user: user
        pass: pass
        database name: database_name

    I've set the following line in the settings.php file:

        $db_url = 'mysqli://user:password@localhost/database_name';

    but what I get is this: "If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider." I guess the database is running; it is always running and I can access it with phpMyAdmin, so I think the problem is not there. The database and website file uploads were also successful, so I don't know what to do to fix this issue. It is MySQL on an IIS server. Thanks

    Read the article

  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have configured our application to connect to a backup database in case the production database fails. If this happens, then all transactions will take place on that backup database. Once the production database server is running again, how do we make sure the changes made in the backup database are reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that I'll be talking about SQL Server disk IO - well, "disk" might not be relevant anymore, because I'll be talking about SSDs and Fusion-io - basically I'll be talking about the underpinnings: making sure you understand and get it right, how to monitor, and so on. If you've any specific problems or questions just ping me an email: [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 - SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like Fusion-io. He will look at the effect fragmentation has and how to minimise its impact, and he will look at the file structure of a database and talk about what benefits multiple files and filegroups bring. We will also touch on database mirroring and the effect that has on throughput, and how to get a feeling for the throughput you should expect.

    19:15 - Break

    19:45 - Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross-Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace feature?

    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
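    In outline, transporting a single tablespace across endian formats on 10gR2 looks something like the sketch below. The names and paths are illustrative only, and for E-Business Suite the certified procedure in the relevant My Oracle Support note should be followed rather than this generic sequence:

        -- On the source database: verify the tablespace is self-contained, then make it read-only
        EXEC DBMS_TTS.TRANSPORT_SET_CHECK('APPS_TS_TX_DATA', TRUE);
        ALTER TABLESPACE apps_ts_tx_data READ ONLY;

        -- Export the tablespace metadata with Data Pump
        -- $ expdp system DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp TRANSPORT_TABLESPACES=apps_ts_tx_data

        -- Convert the datafiles for the target platform's endianness (RMAN, on the source)
        -- RMAN> CONVERT TABLESPACE apps_ts_tx_data
        --       TO PLATFORM 'Linux x86 64-bit'
        --       FORMAT '/stage/%U';

        -- On the target database: copy the converted files over and plug the tablespace in
        -- $ impdp system DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp
        --        TRANSPORT_DATAFILES='/u01/oradata/PROD/apps_ts_tx_data01.dbf'
        ALTER TABLESPACE apps_ts_tx_data READ WRITE;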

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4TB of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4TB of data, then you're "Looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course.

    Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behaviour. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are not nearly as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in place

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration and deployment.

    Read the article
