Search Results

Search found 37600 results on 1504 pages for 'database engine'.

  • App Engine remote_api with OpenID

    - by hawkettc
    Hi, I've recently tried to switch my App Engine app to using OpenID, but I'm having an issue authenticating with remote_api. The old authentication mechanism for remote_api doesn't seem to work (which makes sense) - I'm getting a 'urllib2.HTTPError: HTTP Error 302: Found', which I assume is App Engine redirecting me to the OpenID login page I've set up. I guess I'm missing something fairly obvious. Currently my remote_api script contains the following:

        remote_api_stub.ConfigureRemoteDatastore(app_id=app_id,
                                                 path='/remote_api',
                                                 auth_func=auth_func,
                                                 servername=host,
                                                 secure=secure)

    where auth_func is:

        def auth_func():
            return raw_input('Username:'), getpass.getpass('Password:')

    Any ideas what I need to supply to remote_api? I guess similar issues would be encountered with the bulkloader too. Cheers, Colin
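
    A note worth adding, as an assumption to verify rather than a confirmed fix: remote_api in this era authenticated via Google Accounts (ClientLogin), so auth_func must return the credentials of a Google Account that administers the app, and the /remote_api handler must stay behind the Google Accounts admin check. Once the app is switched to federated (OpenID) login, those credentials have nowhere to go, and the 302 to the OpenID page is the symptom. For comparison, a minimal ClientLogin-style setup script (the app id and host are placeholders):

        import getpass
        from google.appengine.ext.remote_api import remote_api_stub

        def auth_func():
            # Google Account (not OpenID) credentials of an app admin
            return raw_input('Email: '), getpass.getpass('Password: ')

        remote_api_stub.ConfigureRemoteDatastore(
            app_id='your-app-id',                  # placeholder
            path='/remote_api',
            auth_func=auth_func,
            servername='your-app-id.appspot.com',  # placeholder
            secure=True)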

  • Custom keys for Google App Engine models (Python)

    - by Cameron
    First off, I'm relatively new to Google App Engine, so I'm probably doing something silly. Say I've got a model Foo:

        class Foo(db.Model):
            name = db.StringProperty()

    I want to use name as a unique key for every Foo object. How is this done? When I want to get a specific Foo object, I currently query the datastore for all Foo objects with the target unique name, but queries are slow (plus it's a pain to ensure that name is unique when each new Foo is created). There's got to be a better way to do this! Thanks.
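
    One hedged suggestion, since this is a very common pattern: make the name the entity's key_name. The datastore guarantees key uniqueness, and key fetches avoid a query entirely. A minimal sketch:

        from google.appengine.ext import db

        class Foo(db.Model):
            name = db.StringProperty()

        # key_name makes the name part of the key, which is unique by definition
        foo = Foo(key_name='rufus', name='rufus')
        foo.put()

        same_foo = Foo.get_by_key_name('rufus')    # key fetch - no query needed

        # get_or_insert creates-or-fetches atomically, avoiding duplicate races
        other = Foo.get_or_insert('fido', name='fido')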

  • JDO exception in Google App Engine transaction

    - by Mariselvam
    I am getting the following exception while trying to use transactions in the App Engine datastore:

        javax.jdo.JDOUserException: Transaction is still active.
        You should always close your transactions correctly using commit() or rollback().
        FailedObject:org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManager@12bbe6b
            at org.datanucleus.jdo.JDOPersistenceManager.close(JDOPersistenceManager.java:277)

    The following is the code snippet I used:

        List<String> friendIds = getFriends(userId);
        Date currentDate = new Date();
        PersistenceManager manager = pmfInstance.getPersistenceManager();
        try {
            Transaction trans = manager.currentTransaction();
            trans.begin();
            for (String friendId : friendIds) {
                User user = manager.getObjectById(User.class, friendId);
                if (user != null) {
                    user.setRecoCount(user.getRecoCount() + 1);
                    user.setUpdatedDate(currentDate);
                    manager.makePersistent(user);
                }
            }
            trans.commit();
        } finally {
            manager.close();
        }
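
    A hedged guess at the mechanism: if anything in the try block throws - getObjectById raises JDOObjectNotFoundException for a missing id, and a datastore transaction that touches several entity groups will itself fail on the App Engine datastore - then trans.commit() is skipped, and manager.close() finds the transaction still active. The usual guard is a rollback in finally, sketched here:

        try {
            Transaction trans = manager.currentTransaction();
            trans.begin();
            // ... same update loop as above ...
            trans.commit();
        } finally {
            if (manager.currentTransaction().isActive()) {
                manager.currentTransaction().rollback();  // exception path
            }
            manager.close();
        }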

  • Custom constructors for models in Google App Engine (python)

    - by Nikhil Chelliah
    I'm getting back to programming for Google App Engine and I've found, in old, unused code, instances in which I wrote constructors for models. It seems like a good idea, but there's no mention of it online and I can't test to see if it works. Here's a contrived example, with no error-checking, etc.:

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            def __init__(self, name, breed_list, **kwargs):
                db.Model.__init__(self, **kwargs)
                self.name = name
                self.breeds = breed_list.split()

        rufus = Dog('Rufus', 'spaniel terrier labrador')
        rufus.put()

    The **kwargs are passed on to the Model constructor in case the model is constructed with a specified parent or key_name, or in case other properties (like age) are specified. This constructor differs from the default in that it requires that a name and breed_list be specified (although it can't ensure that they're strings), and it parses breed_list in a way that the default constructor could not. Is this a legitimate form of instantiation, or should I just use functions or static/class methods? And if it works, why aren't custom constructors used more often?
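
    If I remember the SDK's behavior correctly (worth verifying), db.Model's constructor is also invoked by the SDK itself when entities are loaded back from the datastore, with properties passed as keyword arguments - so an __init__ that demands extra positional arguments can break fetches even when writes appear to work. The commonly suggested alternative is a factory classmethod; a sketch:

        from google.appengine.ext import db

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            @classmethod
            def create(cls, name, breed_list, **kwargs):
                # parent/key_name/extra properties still flow through kwargs
                return cls(name=name, breeds=breed_list.split(), **kwargs)

        rufus = Dog.create('Rufus', 'spaniel terrier labrador')
        rufus.put()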

  • App Engine Bulkloader

    - by gurkan
    Hi all, I am trying to use the Bulkloader of Google App Engine, but unfortunately I could not understand what to do from the documentation. It says to add this part to app.yaml:

        builtins:
        - remote_api: on

    OK, I have added it. Then it says that I have to execute this command:

        appcfg.py update

    but I don't have any appcfg.py file. And also, what is the command that executes this line? Please somebody tell me what I am missing. I use AppEngineLauncher to upload my project to the server; I have never used a command to update or upload it. Thanks in advance.
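
    For context, hedged as the usual workflow rather than the only one: appcfg.py is a command-line tool bundled with the SDK, not a file in your project. On a Mac, the App Engine Launcher can put it on your PATH via its "Make Symlinks..." menu item. Then, from Terminal (the paths, loader file, CSV name and kind below are illustrative):

        # after adding the remote_api builtin to app.yaml, redeploy:
        appcfg.py update /path/to/your/app

        # then upload data; config file, CSV file and kind are placeholders
        appcfg.py upload_data --config_file=loader.py --filename=data.csv \
            --kind=MyKind /path/to/your/app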

  • Google App Engine dev_appserver can't find PIL (I've installed it)

    - by goggin13
    I recently upgraded my Google App Engine Launcher on my Mac, running OS X 10.5.8, and afterwards my projects that work with images stopped working locally. It seems to be the same problem that I had when first using GAE locally to work with images, before I installed PIL. Here is the error I get:

        SystemError: Parent module 'PIL' not loaded

    I have PIL installed. When I run Python normally, I can access it and work with it as expected. I also checked to ensure that dev_appserver.py was running the same version of Python. If I open the interpreter and type sys.version I get this back:

        2.5 (r25:51918, Sep 19 2006, 08:49:13)
        [GCC 4.0.1 (Apple Computer, Inc. build 5341)]

    This is identical to what I get when I display sys.version from my projects running through dev_appserver. Any thoughts on why dev_appserver can't find the PIL module? I have been banging my head against this for a bit. Thank you!
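
    A diagnostic sketch, based on the assumption that the newer SDK imports PIL as a package (from PIL import Image) while some older installations only provide a top-level Image module. Run this with the same Python that dev_appserver uses:

        import sys
        print sys.version

        import PIL           # must work as a package import for the SDK...
        import PIL.Image     # ...including the Image submodule
        import Image         # a bare top-level Image module alone is not enough

    If import Image succeeds but import PIL fails, that would match the 'Parent module PIL not loaded' symptom, and reinstalling PIL so that it lands as a proper PIL package would be the thing to try.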

  • Keeping connection to APNs open on App Engine using Modules in Go

    - by user3727820
    I'm trying to implement iOS push notifications for a message-board app I've written (notifications for new messages, etc.) but have no real idea where to start. A lot of the current documentation seems to be out of date with regard to keeping persistent TLS connections open to APNs from App Engine, and links to articles about deprecated backends. I'm using the Go runtime and just keep getting stuck. For instance, creating the socket connection to APNs requires a Context, which can only be got from an HTTP request, but architecturally this doesn't seem to make a lot of sense, because ideally the socket remains open regardless. Are there any clearer guides around that I'm missing, or is it a better idea right now to set up a separate VPS or Compute instance to handle it?

  • Difficulties with Django on Google App Engine

    - by Rosarch
    I have a Django project that works fine. I'm trying to import it to Google App Engine. I run it on the dev server, and I get an import error:

        ImportError at / No module named mysite.urls

    This is the folder structure of mysite/:

        app.yaml
        <DIR> myapp
        index.yaml
        main.py
        manage.py
        <DIR> media
        settings.py
        urls.py
        __init__.py

    app.yaml:

        application: mysite
        version: 1
        runtime: python
        api_version: 1

        handlers:
        - url: .*
          script: main.py

    from settings.py:

        ROOT_URLCONF = 'mysite.urls'

    What am I doing wrong?
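
    One hedged reading of the error: because app.yaml sits inside mysite/, the dev server treats that folder itself as the import root, so modules should be referenced as urls, settings, etc., without the mysite. prefix. A sketch of the corresponding changes (the Django version pin is an assumption):

        # settings.py - the project folder is the import root on App Engine
        ROOT_URLCONF = 'urls'    # instead of 'mysite.urls'

        # main.py - a typical WSGI bootstrap for Django on the python runtime
        import os
        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

        from google.appengine.dist import use_library
        use_library('django', '1.1')   # version is an assumption

        import django.core.handlers.wsgi
        from google.appengine.ext.webapp.util import run_wsgi_app

        def main():
            run_wsgi_app(django.core.handlers.wsgi.WSGIHandler())

        if __name__ == '__main__':
            main()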

  • Google App Engine datastore - primary key

    - by megala
    Hi, I've created a table in the Google App Engine datastore. It contains the fields GROUPNAME, GROUPID and GROUPDESC. How do I set GROUPID as the primary key? My code is as follows:

        @Entity
        @Table(name="group", schema="PUBLIC")
        public class Creategroup {
            @Basic
            private String groupname;
            @Basic
            private String groupid;
            @Basic
            private String groupdesc;

            public void setGroupname(String groupname) { this.groupname = groupname; }
            public String getGroupname() { return groupname; }

            public void setGroupid(String groupid) { this.groupid = groupid; }
            public String getGroupid() { return groupid; }

            public void setGroupdesc(String groupdesc) { this.groupdesc = groupdesc; }
            public String getGroupdesc() { return groupdesc; }

            public Creategroup(String groupname, String groupid, String groupdesc) {
                this.groupname = groupname;
                this.groupid = groupid;
                this.groupdesc = groupdesc;
            }
        }
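
    The standard JPA answer, sketched under the assumption that group ids are assigned by the application: annotate the key field with @Id. The App Engine datastore supports an unencoded String key of this kind:

        import javax.persistence.Basic;
        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        public class Creategroup {
            @Id
            private String groupid;    // application-assigned primary key

            @Basic
            private String groupname;
            @Basic
            private String groupdesc;

            // constructor and getters/setters as before
        }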

  • App Engine - Save response from an API in the data store as file (blob)

    - by herrherr
    Hi there, I'm banging my head against the wall with this one: what I want to do is store a file that is returned from an API in the data store as a blob. Here is the code that I use on my local machine (which of course works, due to an existing file system):

        client.convertHtml(html, open('html.pdf', 'wb'))

    Since I cannot write to a file on App Engine, I tried several ways to store the response, without success. Any hints on how to do this? I was trying to do it with StringIO and managed to store the response, but then wasn't able to store it as a blob in the data store. Thanks, Chris
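
    Since StringIO already captured the response, the missing step is usually wrapping the bytes in db.Blob for a BlobProperty. A sketch (client and html are from the question; the model is hypothetical):

        from StringIO import StringIO
        from google.appengine.ext import db

        class StoredPdf(db.Model):        # hypothetical model
            data = db.BlobProperty()

        buf = StringIO()
        client.convertHtml(html, buf)     # the API writes into the in-memory file
        StoredPdf(data=db.Blob(buf.getvalue())).put()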

  • Google App Engine, Java, and HTTP Performance

    - by polyclef
    A friend and I are currently working on a turn-based game with chat, with both desktop browser and Android clients, and Google App Engine as the server. We're using the Java API for GAE and HTTP for communication with the server. We've implemented simple chat functionality, and we're seeing undesirable latencies of 1-3 seconds from both the browser and Android clients when just posting simple one-word chat messages. My friend thought it would be best to use XMPP instead of HTTP, but we want to use a Google Accounts cookie for authentication from the Android client, and according to the GAE documentation, XMPP clients cannot use a Google Accounts cookie and must use the user's password. Does anyone have any suggestions as to where the latency might be coming from, how to troubleshoot it, and/or what to do about it? Also, is anyone aware of any open-source implementations of chat (or something similar) on GAE done in Java? I can't seem to find any.

  • Feedback on availability with Google App Engine

    - by Ron
    We've had some good experiences building an app on Google App Engine. This first app's target audience is Google Apps users, so no issues there in terms of it being hosted on Google infrastructure. We like it so much that we would like to investigate using it for another app; however, this next project is for a client who is not really interested in what technology it sits on - they just want it to work, and work all of the time. In this scenario, given that we have the technology applicability and capability side covered, are there any concerns that this stuff is still relatively new and that we may not be as much "in control" as if we had it done with traditional hosting?

  • OpenID login on local development server for google app engine

    - by Alex Jeffery
    Are you able to use OpenID to log into the local development server with Google App Engine SDK version 1.4.1 and Python 2.5? When I execute this:

        self.redirect(users.create_login_url(continue_url, None, openid_url))

    I get redirected to http://localhost/_ah/login rather than the OpenID URL. The OpenID URL and continue URL are valid. My app.yaml looks like this:

        - url: /_ah/login_required
          script: do_openid_login.py

        - url: /users/(.*)
          script: routers/user_router.py
          login: required

    If I browse to http://localhost/users/ I am also redirected to http://localhost/_ah/login rather than http://localhost/_ah/login_required. Is there a config issue, or does OpenID not work locally?
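
    One hedged explanation: the dev server replaces the users API with its own stub login page and, as far as I know, does not simulate federated login at all, so both redirects landing on /_ah/login are expected locally rather than a config issue. A common workaround is to branch on the environment:

        import os

        IS_DEV = os.environ.get('SERVER_SOFTWARE', '').startswith('Development')

        if IS_DEV:
            # the dev server's stub login page
            self.redirect(users.create_login_url(continue_url))
        else:
            self.redirect(users.create_login_url(continue_url, None, openid_url))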

  • Exception in Google App Engine (Java) while trying to create Memcache Object

    - by Shreeni
    I am coming back to an old Google App Engine project in which I saw a bug. In the meantime I have upgraded my App Engine SDK, which is now at 1.3. When I try to run the same project again, I see the following exception:

        java.lang.NoSuchMethodError: com.google.apphosting.api.ApiProxy$Environment.getDefaultNamespace()Ljava/lang/String;
            at com.google.appengine.api.NamespaceManager.get(NamespaceManager.java:56)
            at com.google.appengine.api.memcache.MemcacheServiceImpl.setNamespace(MemcacheServiceImpl.java:181)
            at com.google.appengine.api.memcache.MemcacheServiceImpl.<init>(MemcacheServiceImpl.java:145)
            at com.google.appengine.api.memcache.MemcacheServiceFactory.getMemcacheService(MemcacheServiceFactory.java:25)

    The line causing the problem is:

        CacheManager.getInstance().getCacheFactory().createCache(Collections.emptyMap());

    (It is the same line as suggested by the App Engine documentation to create a Memcache object. It used to work fine previously.) Any suggestions on how to fix it?

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production.

    The motivation: we want to present information from these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources).

    The problem: how do we collect data from multiple databases, store it at a central location, and clearly mark the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin?

    The idea of a centralized database isn't set in stone. If there is a solution that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.
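
    For illustration only - a minimal pull-based collector in Python that copies new rows from each production database into the central one and tags each row with its origin. The table and column names, the incremental-id scheme, and the last_seen bookkeeping are all assumptions:

        import MySQLdb  # the MySQL-python bindings

        SOURCES = {  # hypothetical connection settings, one entry per site
            'site_a': dict(host='10.0.0.1', user='ro', passwd='secret', db='app'),
            'site_b': dict(host='10.0.0.2', user='ro', passwd='secret', db='app'),
        }

        central = MySQLdb.connect(host='central', user='rw', passwd='secret',
                                  db='central')

        def last_seen(origin):
            # highest source row id already copied for this origin
            cur = central.cursor()
            cur.execute("SELECT COALESCE(MAX(source_id), 0) FROM orders "
                        "WHERE origin = %s", (origin,))
            return cur.fetchone()[0]

        for origin, params in SOURCES.items():
            src = MySQLdb.connect(**params)
            cur = src.cursor()
            cur.execute("SELECT id, payload FROM orders WHERE id > %s",
                        (last_seen(origin),))
            ins = central.cursor()
            for row_id, payload in cur.fetchall():
                # origin + source id identify the row uniquely in the central DB
                ins.execute("REPLACE INTO orders (origin, source_id, payload) "
                            "VALUES (%s, %s, %s)", (origin, row_id, payload))
            central.commit()
            src.close()

    This keeps the production schemas untouched; the origin column lives only in the central copy.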

  • WPF application with MS Access database as a data source

    - by Kay Zed
    I have a Microsoft Access 2010 database. Now, using Visual Studio 2010, I want to create a WPF application and add the database as a data source. The app will have a window with a frame that provides navigation through pages. No problem so far. But:

    - What is the right way to set up the database in this scenario? Tables only? Or must everything go via queries? (VS2010 talks about views, which I assume are queries.)
    - Database data must be updatable and records can be added. Some relationships go through link tables (many-to-many) and there are nullable foreign key relationships. Must I take manual steps to make it work?
    - While adding the data source, VS2010 created an .xsd from my Access database. I think the .xsd might need further tweaking for the application to work the right way. If I change my Access database design, I'd have to regenerate the .xsd as well. Is this right, and is it the way it is usually done? OR, should I let the original Access database go and give the application the capability to create new empty databases?
    - How do you provide controls in a page to step through the records in a table? Is there a special database control?
    - What is the way (WPF class?) to load records into the data context that displays in a page? (At this level it probably does not matter what type of data source it is.)

  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed. I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give!

        function Invoke-SqlRestore {
            param(
                [string]$backup_file_name,
                [string]$server_name,
                [string]$database_name,
                [switch]$norecovery=$false
            )
            # Get a new connection to the server
            [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
            Write-Host "Starting restore to $database_name on $server_name."
            Try {
                $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
                # Get local paths to the Database and Log file locations
                If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
                Else { $database_path = $server.Settings.DefaultFile }
                If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
                Else { $database_log_path = $server.Settings.DefaultLog }
                # Load up the Restore object settings
                $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
                $restore.Action = 'Database'
                $restore.Database = $database_name
                $restore.ReplaceDatabase = $true
                if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
                Else { $restore.Norecovery = $false }
                $restore.Devices.Add($backup_device)
                # Get information from the backup file
                $restore_details = $restore.ReadBackupHeader($server)
                $data_files = $restore.ReadFileList($server)
                # Restore all backup files
                ForEach ($data_row in $data_files) {
                    $logical_name = $data_row.LogicalName
                    $physical_name = Get-FileName -path $data_row.PhysicalName
                    $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
                    $restore_data.LogicalFileName = $logical_name
                    if ($data_row.Type -eq "D") {
                        # Restore Data file
                        $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
                    }
                    Else {
                        # Restore Log file
                        $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
                    }
                    [Void]$restore.RelocateFiles.Add($restore_data)
                }
                $restore.SqlRestore($server)
                # If there are two files, assume the next is a Log
                if ($restore_details.Rows.Count -gt 1) {
                    $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
                    $restore.FileNumber = 2
                    $restore.SqlRestore($server)
                }
            }
            Catch {
                $ex = $_.Exception
                Write-Output $ex.message
                $ex = $ex.InnerException
                while ($ex.InnerException) {
                    Write-Output $ex.InnerException.message
                    $ex = $ex.InnerException
                }
                Throw $ex
            }
            Finally {
                $server.ConnectionContext.Disconnect()
            }
            Write-Host "Restore ended without any errors."
        }

  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained as new items were added (to allow us to have scripts run in order and handle dependencies), and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SPs etc. were drop-and-recreate. Fast-forward to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Redgate's tools for updating our production database (SQL Compare), which is very handy and requires little work.

    Question: How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than the DB), but I'm going to be pressed to get past the ease of using SQL Compare to update the database. I don't really want to have us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control

  • Which Database to choose?

    - by Sundar
    I have the following criteria:

    1. The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, like MS Access.
    2. There will be no central database server. Each machine will run its own database server locally and users will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support.
    3. Strong consistency is not needed. Users will synchronize each other's databases when they need to. In case of conflicts, it should be possible to find the conflict and present it (from the application) to the user for fixing.
    4. Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards one, but I don't know what to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same:

    - How to securely store database connection details
    - Securely connecting to database within a application

    Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user that runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively, I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities:

    1. The first referenced question was generally answered by making the file only readable for the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible by the person).
    2. The first answer additionally proposed encrypting the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string because he cannot encrypt it by hand.
    3. The second referenced question provides an approach for this very scenario, but it seems very complicated.

    My questions to you: this is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"? Is there some support in .NET's config infrastructure? (Optional, maybe out of scope:) Can I combine that easily with the NHibernate configuration mechanism?

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised - is that:

    - Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware.
    - There is psycopg2, but it makes a lot of noise about doing its own connection pooling and such; does this coexist gracefully/usefully/transparently with Twisted's async database connection pooling?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL - after the Oracle takeover, who'd want to adopt it now, or adopt a fork?

    Is there anything else out there?
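
    For reference, a sketch of the adbapi usage: ConnectionPool wraps a blocking DB-API 2.0 module such as psycopg2 and runs each query on a pool thread, so psycopg2's own pooling features can simply go unused (credentials and table names below are placeholders):

        from twisted.enterprise import adbapi

        # adbapi wraps a synchronous DB-API module and pools its connections
        dbpool = adbapi.ConnectionPool('psycopg2',
                                       host='localhost', database='app',
                                       user='app', password='secret',
                                       cp_min=3, cp_max=10)

        def record_event(payload):
            # returns a Deferred; the insert runs on a pool thread
            return dbpool.runOperation(
                "INSERT INTO events (payload) VALUES (%s)", (payload,))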

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, when trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.

  • How do I implement/build/create an 'in-memory database' for my unit tests?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer, thus going to the database every time. So I implemented Unity to inject a fake database layer, but of course I want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation I found on the project I just started on, and I was wondering if this was the way to go. Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist, etc. the data. It sounds weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
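
    For what 'in-memory database' usually means concretely: SQLite can open a database at the special name :memory:, which lives only in RAM and disappears with the connection (the same engine is available to .NET through providers such as System.Data.SQLite). A Python sketch, purely to make the idea concrete:

        import sqlite3
        import unittest

        class UserStoreTest(unittest.TestCase):
            def setUp(self):
                # a fresh, throwaway database living purely in RAM
                self.db = sqlite3.connect(':memory:')
                self.db.execute(
                    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

            def test_insert_and_fetch(self):
                self.db.execute("INSERT INTO users (name) VALUES (?)", ('michel',))
                row = self.db.execute("SELECT name FROM users").fetchone()
                self.assertEqual(row[0], 'michel')

            def tearDown(self):
                self.db.close()   # everything vanishes with the connection

        if __name__ == '__main__':
            unittest.main()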

  • How to optimize paging for large in memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory using an STL map for each table in the database. Each item in an STL map is a complex object with references to items in the other STL maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients are able to contact the application and get a filtered version of the entire database. This is done by running through the entire database and finding the items relevant to the client. When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of the application's RAM (even though there is 16 GB of RAM on the machine). After the application has been partly paged out, a client logon takes a long time (10 minutes), because it now generates a page fault for each pointer lookup in the STL maps. I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution could be to loop through the entire in-memory database, thus telling Windows we are still interested in keeping the data model in RAM. I guess another poor man's solution could be to disable the page file completely on Windows. I guess the expensive solution would be an SQL database, and then rewriting the entire application to use a database layer; hopefully the database system would then have implemented means for fast access. Are there other, more elegant solutions?

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases

    Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that's reliable and affordable. That's why Oracle created the new Oracle Database Appliance - a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world's most popular database - Oracle Database 11g - with system software, servers, storage and networking in a single box. Business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity - as well as groundbreaking ease of use. But that is not all: with support for all Oracle Database Options, the Oracle Database Appliance can be the ideal solution for many use cases.

    The benefits?

    - Unmatched performance, reliability & security for your data that's there when you need it - which is all the time.
    - Fast installation, simple deployment, easy management. Out of the box.
    - Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself.
    - Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time.

    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency.

    Agenda:

    - Oracle Database & Engineered Systems Innovation
    - What's the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit regularly our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle Technologies as well as upcoming partner webcasts and events.
