Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Understanding CGI and SQL security from the ground up

    - by Steve
    This question is for learning purposes. Suppose I am writing a simple SQL admin console using CGI and Python. At http://something.com/admin, this admin console should allow me to modify a SQL database (i.e., create and modify tables, and create and modify records) using an ordinary form. In the least secure case, anybody can access http://something.com/admin and modify the database. You can password-protect http://something.com/admin, but once you start using the admin console, information is still transmitted in plain text. So then you use HTTPS to secure the transmitted data. Questions: To describe to a learner, how would you incrementally add security to the least secure environment in order to make it most secure? How would you modify/augment my three (possibly erroneous) steps above? What basic tools in Python make your steps possible? (A sketch of two further steps follows below the link.) Optional: Now that I understand the process, how do sophisticated libraries and frameworks inherently achieve this level of security?

    Read the article
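
    Beyond password protection and HTTPS, the next increments are usually to hash the passwords you store and to keep user input out of SQL text. A minimal standard-library sketch of both steps; the table whitelist and column names are made up for illustration, and conn is any DB-API connection:

        import hashlib, hmac, os

        def hash_password(password, salt=None):
            # Never store plain-text passwords; store a salted, stretched hash.
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def check_password(password, salt, digest):
            # Constant-time comparison avoids timing side channels.
            return hmac.compare_digest(hash_password(password, salt)[1], digest)

        def rename_record(conn, table, old_name, new_name):
            # Placeholders (?) stop SQL injection for values; identifiers such
            # as table names cannot be parameterized, so whitelist them instead.
            if table not in {"customers", "orders"}:
                raise ValueError("unknown table")
            conn.execute("UPDATE %s SET name = ? WHERE name = ?" % table,
                         (new_name, old_name))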

  • Storing HTML in MySQL using Java

    - by mpcabd
    Hello there again. So, I'm working on a project where I should store webpages inside a database. I'm using crawler4j to crawl and Proxool along with the MySQL Java Connector to connect to my database. When I tested the application I got: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'HTMLData'. The HTMLData column was TEXT. When I changed the HTMLData column to LONGTEXT the error was gone, but I'm afraid it might come back in the future. Any idea how to handle this properly so I don't have to worry about that error (or any similar error) in the future? (A note on the size limits follows below the link.) Thanks :)

    Read the article
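
    For reference, MySQL's TEXT family is limited by encoded byte length, not character count: TEXT tops out at 65,535 bytes, MEDIUMTEXT at roughly 16 MB, and LONGTEXT at roughly 4 GB, so LONGTEXT is effectively safe for crawled pages. A small pre-insert guard, sketched in Python with illustrative names:

        MYSQL_TEXT_LIMITS = {
            "TEXT": 2**16 - 1,        # 65,535 bytes
            "MEDIUMTEXT": 2**24 - 1,  # ~16 MB
            "LONGTEXT": 2**32 - 1,    # ~4 GB
        }

        def fits_column(html, column_type="LONGTEXT"):
            # MySQL's limits are in bytes, so measure the encoded form,
            # not len(html), which counts characters.
            return len(html.encode("utf-8")) <= MYSQL_TEXT_LIMITS[column_type]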

  • Django Redundancy

    - by Sunsu
    I've read many things about scaling Django, and the new multiple-DB support makes it so much easier. However, I have not been able to find much information on good ways to create a fully redundant system (not just one that scales). I realize there are many things that go into this problem, but the real thing I'm having trouble solving well is database redundancy. Is it possible to set up a "write slave" using Django's new multiple-DB support? If I had IP failover support, it seems like having a write slave would help solve the problem. Simple MySQL replication doesn't seem like it will work due to slave lag, right? What's the typical method of creating a redundant database system? (A router sketch follows below the link.) Any input or guidance you have would be greatly appreciated. I realize I could be asking the wrong questions!

    Read the article
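
    Django's multiple-DB support doesn't give you a "write slave" as such; the usual pattern is a database router that sends all writes to the primary and reads to a replica, with failover handled below Django (e.g., by the IP failover mentioned above). A minimal sketch, assuming a 'replica' alias is defined in DATABASES and the router is listed in DATABASE_ROUTERS (names are illustrative):

        # routers.py - enabled via DATABASE_ROUTERS = ["myapp.routers.PrimaryReplicaRouter"]

        class PrimaryReplicaRouter(object):
            """Writes go to 'default' (the primary); reads go to 'replica'."""

            def db_for_read(self, model, **hints):
                # A read issued right after a write can race replication lag;
                # pin read-your-own-writes paths with queryset.using('default').
                return 'replica'

            def db_for_write(self, model, **hints):
                return 'default'

            def allow_relation(self, obj1, obj2, **hints):
                # Both aliases expose the same logical data set.
                return True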

  • How to set utf8 in the auto-generated PHP code of Flash Builder 4?

    - by Mark
    Hi, PHP problem here (I think): I've just created a Flex (Flash Builder) project with a datagrid linked to a database - the database is all utf8. When I run the project using the auto-generated code in Flex 4, the non-English part comes out like ????? while the English part comes through fine. The auto-generated PHP code uses mysqli. I've tried: $this->connection->set_charset('utf8'); or mysqli_query($this->connection, "SET NAMES utf8"); I also tried writing the code myself (I'm not a PHP guy): mysql_query("set names utf8"); was fine - but that's mysql and not mysqli (that's an "i" after the mysql) and I want to use the auto-generated code... any help is appreciated.

    Read the article

  • How do I submit really big amounts of data from a form?

    - by William Calleja
    I have an HTML form that's posting a really big amount of data, which is eventually being saved into SQL Server 2005. The form is as follows: <form name="frmForm" method="post" action="saveData.aspx"> The target page takes the content of a control within the form and saves it to the database through a normal SQL INSERT statement. However, only a portion of the data is being saved. The field in the database is an ntext. Should I use a different field type? Or is something happening while I'm transferring from one page to another? Or, even still, is something happening when I'm sending the really big SQL statement through C# in saveData.aspx?

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 WebForms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details, etc. into one control (thinking wizard control). Then I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions: 1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest? 2. I've set primary keys and indexes on my order details, orders, and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process? (A transaction sketch follows below the link.) Thanks, Sid

    Read the article
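
    There is no special stored-procedure syntax for referential integrity beyond doing the three inserts in one transaction and feeding each generated key into the next insert; in T-SQL that means BEGIN TRAN, SCOPE_IDENTITY() after each insert, and COMMIT. A language-neutral sketch of that shape, written in Python against SQLite with made-up table and column names (conn's context manager scopes the transaction):

        def save_order(conn, customer, order, details):
            # One transaction: either every row lands or none do, so the
            # foreign keys can never point at missing parent rows.
            with conn:  # commits on success, rolls back on any exception
                cur = conn.execute(
                    "INSERT INTO Customers (Name, Email) VALUES (?, ?)",
                    (customer["name"], customer["email"]))
                customer_id = cur.lastrowid
                cur = conn.execute(
                    "INSERT INTO Orders (CustomerID, OrderDate) VALUES (?, ?)",
                    (customer_id, order["date"]))
                order_id = cur.lastrowid
                for d in details:
                    conn.execute(
                        "INSERT INTO OrderDetails (OrderID, Sku, Qty) "
                        "VALUES (?, ?, ?)",
                        (order_id, d["sku"], d["qty"]))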

  • Is it a good idea to close and open hibernate sessions frequently?

    - by Gaurav
    Hi, I'm developing an application which requires that the state of entities be read from a database at frequent intervals or triggers. However, once Hibernate reads the state, it doesn't re-read it unless I explicitly close the session and read the entity in a new session. Is it a good idea to open a session every time I want to read the entity and then close it afterwards? How much of an overhead does this put on the application and the database (we use a c3p0 connection pool also)? Will it be enough to simply evict the entity from the session before reading it again?

    Read the article

  • Human readable URL causes a problem in Ruby on Rails

    - by TK
    I have a basic CRUD with a "Company" model. To make the company name show up in the URL, I did: def to_param; name.parameterize; end Then I accessed http://localhost:3000/companies/american-express, which runs the show action in the companies controller. Obviously this doesn't work, because the show method is as follows: def show; @company = Company.find_by_id(params[:id]); end The params[:id] is american-express. This string is not stored anywhere. Do I need to store the short string (i.e., "american-express") in the database when I save the record? Or is there any way to retrieve the company data without saving the string in the database?

    Read the article

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a Rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join, and for email notifications. I'm not sure if I should just drop these invites and notifications in my database, using a model, and then process them with a worker every x minutes, or if I should go for Amazon SQS, storing the messages and invites there and letting my worker retrieve them from Amazon SQS for processing (sending the invites/notifications). The Amazon approach would take load off my database, but I guess it is slower to retrieve messages from there. What do you think? (A polling-worker sketch follows below the link.)

    Read the article
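
    At invite/notification volumes, a plain database table plus a periodic worker is usually the simpler choice, and it keeps the queue transactional with the rest of the data; SQS mainly pays off once queue traffic itself would load the database. A hedged sketch of such a polling worker, assuming a hypothetical invites table with recipient, body, and sent_at columns:

        import sqlite3, time

        def send_mail(recipient, body):
            print("sending invite to", recipient)  # stand-in for the real mailer

        def run_worker(db_path, poll_seconds=300):
            conn = sqlite3.connect(db_path)
            while True:
                # Grab a batch of unsent invites; the table doubles as the queue.
                rows = conn.execute(
                    "SELECT id, recipient, body FROM invites "
                    "WHERE sent_at IS NULL LIMIT 50").fetchall()
                for invite_id, recipient, body in rows:
                    send_mail(recipient, body)
                    with conn:  # mark as sent only after a successful send
                        conn.execute(
                            "UPDATE invites SET sent_at = CURRENT_TIMESTAMP "
                            "WHERE id = ?", (invite_id,))
                time.sleep(poll_seconds)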

  • What is the best way to add attributes to auto-generated entities (using VS2010 and EF4)

    - by Dani
    ASP.NET MVC2 has strong support for using attributes on entities (validation, extending the Html helper class, and more). If I generated my model from the database using the VS2010 EF4 Entity Data Model (the edmx and its cs class), and I want to add attributes on some of the entities, what would be the best practice? How should I cope with updating the model (adding more fields/tables to the database and merging them into the edmx) - will it keep my attributes, or generate a new cs file, erasing everything? ("Manual changes to this file may cause unexpected behavior in your application." "Manual changes to this file will be overwritten if the code is regenerated.")

    Read the article

  • Double Inner Join generates unexpected error

    - by Itamar Marom
    In my database I have three tables: Users: UserID (auto-numbering), UserName, UserPassword, and a few other unimportant fields. PrivateMessages: MessageID (auto-numbering), SenderID, and a few other fields defining the message content. MessageStatus: MessageID, ReceiverID, MessageWasRead (boolean). What I need is a query to which I input a user's ID and get all the private messages he has received. In addition, I also need each message's sender UserName. For this I wrote the following query:

        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM PrivateMessages
        INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID=[@userid];

    But for some reason, when I try saving it in my Access database, I get the following error (translated to English by me, since my Office is in a different language): Syntax error (missing operator) at expression: "PrivateMessages.SenderID = Users.UserID INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageI". Any ideas what could cause this? (A corrected form follows below the link.) Thanks.

    Read the article
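
    The likely culprit is the dialect, not the data model: Jet/ACE SQL requires every additional join to be nested in parentheses, and ANSI-style chained INNER JOINs fail with exactly this "missing operator" error. A sketch of the nested form, driven from Python via pyodbc here; the connection string and file path are made up:

        import pyodbc

        # Jet/ACE needs the first join wrapped in parentheses before
        # the second join can be applied.
        SQL = """
        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM (PrivateMessages
              INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID)
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID = ?;
        """

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\forum.accdb")
        rows = conn.cursor().execute(SQL, 42).fetchall()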

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned spreads over multiple tables. (One such option is sketched below the link.) All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.

    Read the article
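
    One pattern that avoids versioning every table separately: give only the root table an (id, version) key and have child rows carry that pair as their foreign key, so writing a new version snapshots the whole graph under one version number. A toy sketch of the shape (SQLite syntax, illustrative names):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE product (
            id         INTEGER NOT NULL,
            version    INTEGER NOT NULL,
            name       TEXT,
            valid_to   TEXT,            -- NULL marks the current version
            PRIMARY KEY (id, version)
        );
        -- Children reference the versioned root, so historical versions
        -- keep their exact child rows without versioning each child table.
        CREATE TABLE product_part (
            product_id      INTEGER NOT NULL,
            product_version INTEGER NOT NULL,
            part_no         TEXT NOT NULL,
            FOREIGN KEY (product_id, product_version)
                REFERENCES product (id, version)
        );
        """)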

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing. The problem I am having is that sometimes, NTbackup seems to just... blank out. It always completes backup of the C: and E: drives, but then it will start the system state and not add any more messages into the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log. That causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports that it is completing normally in the event log. Anyone ever seen a case where system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It's just not seeming to complete or log anything.

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from a MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKeys) {
                list.Add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            Document query = new Document();
            query["primKey"] = primaryKey;
            Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? (An $in sketch follows below the link.) Thanks, --Michael

    Read the article
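
    Each FindOne is a full network round trip, which is negligible on localhost and brutal over the internet; batching the keys into a single $in query returns all the documents over one connection. A sketch in Python with pymongo (the C# driver can express the same $in query; the host name is made up):

        from pymongo import MongoClient

        client = MongoClient("mongodb://remote-host:27017")
        coll = client["myDatabase"]["myCollection"]

        def get_objects(keys):
            # One round trip instead of len(keys): a single cursor streams
            # every matching document back over a single connection.
            return list(coll.find({"primKey": {"$in": list(keys)}}))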

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows me to make decent backups. As more apps are added, it should not be necessary to change the backup script much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it could never work? It would be great to hear from you. Thanks, batrick

    Read the article

  • PgJDBC: "no suitable driver found" when following tutorial, why?

    - by Celeritas
    I'm writing a Java program that queries a PostgreSQL database. I'm following this example and have trouble here:

        connection = DriverManager.getConnection(
            "jdbc:postgresql://127.0.0.1:5432/testdb", "mkyong", "123456");

    According to the JavaDoc for DriverManager, the first string is "a database url of the form jdbc:subprotocol:subname". When I connect to the server I type in psql -h dataserv.abc.company.com -d app -U emp24 and give the password qwe123 (for example's sake). What should the first argument of getConnection be? I've tried

        connection = DriverManager.getConnection(
            "jdbc:postgresql://dataserv.abc.company.com", "emp24", "qwe123");

    and get the runtime error: no suitable driver found. I've downloaded the JDBC4 PostgreSQL driver, version 9.2-1000.

    Read the article

  • In MySQL, is it better to have one big table or many smaller tables?

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will include about 12 of my clients, and each of them has an average of 2,100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question, but any advice would be appreciated. (A one-table sketch follows below the link.) Cheers, Chuck

    Read the article
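
    At this scale (about 12 clients x 2,100 customers, roughly 25,000 rows), the conventional answer is one table with a client_id column and an index on it; per-client tables multiply schema maintenance without buying performance. A minimal sketch (SQLite for brevity, illustrative schema):

        import sqlite3

        conn = sqlite3.connect("promos.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS customers (
            id        INTEGER PRIMARY KEY,
            client_id INTEGER NOT NULL,   -- which of the ~12 clients owns this row
            email     TEXT NOT NULL
        );
        -- The index keeps the daily per-client queries cheap.
        CREATE INDEX IF NOT EXISTS idx_customers_client ON customers (client_id);
        """)

        rows = conn.execute(
            "SELECT email FROM customers WHERE client_id = ?", (7,)).fetchall()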

  • Any active Bold for Delphi users?

    - by Roland Bengtsson
    What are you using as a persistence framework when programming in Delphi? If the application grows, it soon becomes really complicated to handle the model in SQL. Bold is a persistence framework for Delphi Win32 that really deserves more attention. I use it daily, and using OCL instead of SQL to get data from the database saves a lot of time and debugging. When the model is changed, Bold translates this to an SQL script and changes the database. EDIT: For those interested in Bold for Delphi, I have spent this evening creating a site on Google about it. I'm not a guru in HTML, so the design is maybe not so exciting, but I want comments and reactions about the site. You can leave the comments in this thread or at the bottom of the subpage. And the address is... http://sites.google.com/site/boldfordelphi/

    Read the article

  • ODBC Storage Size

    - by dcp3450
    I'm pulling a lot of text from a MS SQL Server database, and I'm not getting all of it (the text includes some HTML). The text is stored perfectly in the database. However, when I run the query to get the data, it will only pull part of the text. I pull the data using odbc_exec and store it using $variable = odbc_result($runquery, "body"). If I display the content with odbc_result_all($runquery), I get part of the content. If I use echo $body;, I get part of the content, then some garbage, and then part of the text from the beginning - a very strange response. Is there a size limit? Any ideas what I'm missing here?

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code. I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point. For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server - this could also be a queryable property, but let's assume it is attached to this object
        @end

    Now let's say I want to upload the profile picture to a remote server - I could do something like:

        @implementation ProfilePictureUploader

        - (void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void (^)(BOOL successInUploading))completion {
            NSURLSession *uploadImageSession = ...; // code to set up the upload and call the completion handler
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture - and if it was successful, update the UI and the database to record that this happened:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ...];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db
                aNewProfilePicture.successfullyUploaded = YES;
                [MyDatabase update:aNewProfilePicture];
            }
        }];

    Now obviously if my app is running, this "ProfilePicture" object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot, all callbacks that may exist are maintained, and the app state is straightforward. But I'm not clear what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process, so the upload will continue and my app will be awakened at some point in the future to handle completion. But the object "aNewProfilePicture" is nonexistent at that point, and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the "successfullyUploaded" property for that user)? Do I need to re-work everything touching the DB or UI to correspond with the new API and work in a context-free environment?

    Read the article

  • mysqli::query returns true on SELECT statement

    - by Travis Pessetto
    I have an application that reads in one of its classes:

        public function __construct()
        {
            global $config;

            // Establish a connection to the database and get the result set
            $this->db = new Database("localhost", $config["dbuser"], $config["dbpass"], "student");
            $this->records = $this->db->query("SELECT * FROM major") or die("ERROR: ".$this->db->error);
            echo "<pre>".var_dump($this->records)."</pre>";
        }

    My problem is that var_dump shows that $this->records is a boolean. I've read the documentation and see that a SELECT query should return a result set. This is the only query used by the application. Any ideas where I am going wrong?

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run the instance built from this AMI, where does the instance run - in Amazon's Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions) - how would I do that? It seems I have a poor understanding of the relationship between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error: Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64) My local computer is 32-bit, but I do not want to run the instance on my local computer - I want it on Amazon's servers.

    Read the article

  • How to back up virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How do I back up virtual machines as quickly and storage-efficiently as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downside of copying the whole VMDK files, not only their actually used space; so, for a 30-GB VMDK of which only 1 GB is used, the backup would take 30 full GBs of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied, and/or to blow up to their full provisioned size; also, they are not recommended for actual VM performance. It is ok for me to shut down the VMs before backing them up (i.e., I don't need "live" backups), but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? Also, what about restores?

    Read the article

  • SQL Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages, and I will only have a few messages per user. I am currently querying the table for these messages and converting the XML to strings in my C# code; I then compare the strings with what I am about to insert. Unfortunately, SQL Server pretty-prints the data in the XML-typed fields: what you store into the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have whitespace removed, etc. Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? (A canonicalization sketch follows below the link.) As an aside, if I detect a duplicate I need to delete the older message, then insert the replacement.

    Read the article
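
    Since SQL Server's xml type cannot be compared with = directly, a common approach is to canonicalize before comparing - better yet, store a hash of the canonical form in its own indexed column so the duplicate check becomes a simple lookup at insert time. A sketch of the idea in Python (3.8+, standard library); C14N is what makes whitespace-variant but equivalent documents collide:

        import hashlib
        from xml.etree.ElementTree import canonicalize

        def xml_fingerprint(xml_text):
            # C14N normalizes attribute order and quoting; strip_text also
            # drops ignorable whitespace between elements.
            c14n = canonicalize(xml_text, strip_text=True)
            return hashlib.sha256(c14n.encode("utf-8")).hexdigest()

        assert xml_fingerprint("<m><a>1</a></m>") == \
               xml_fingerprint("<m>\n  <a>1</a>\n</m>")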

  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents a user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for anonymous visitors - and a corresponding new record is created in the User table in the database. This approach allows me to save some state info for the current user regardless of its type. The problem with this approach, however, is the Google bot and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them only ever used for a single page. Question: what is the best trade-off? How do I support anonymous users and save their state, without incurring too much overhead from cookieless bots? (A deferred-creation sketch follows below the link.)

    Read the article
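
    A common trade-off is to defer the INSERT until the client proves it keeps cookies: hand out a cookie on the first response, persist a User row only when it comes back, and short-circuit known crawlers by User-Agent. A hedged Python sketch; the find_or_create_user callable stands in for whatever wraps your User table:

        import secrets

        BOT_MARKERS = ("googlebot", "bingbot", "slurp", "crawler", "spider")

        class AnonymousUser(object):
            """Throwaway in-memory user state; never written to the database."""
            def __init__(self):
                self.pending_cookie = None

        def get_or_create_user(user_agent, cookie_token, find_or_create_user):
            # 1. Known crawlers never get a database row.
            if any(m in user_agent.lower() for m in BOT_MARKERS):
                return AnonymousUser()
            # 2. First visit: issue a cookie but defer the INSERT; cookieless
            #    bots never echo it back, so they never reach step 3.
            if cookie_token is None:
                user = AnonymousUser()
                user.pending_cookie = secrets.token_urlsafe(16)
                return user
            # 3. The cookie came back, so this is (probably) a real browser.
            return find_or_create_user(cookie_token)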
