Search Results

Search found 10329 results on 414 pages for 'berkeley db je'.

  • (Enterprise GlassFish v3 build 11) Communication link problem (MySQL DB)

    - by user312853
    I get a communication link failure while the application tries to establish a connection with the DB:

        [#|2010-04-08T20:09:57.825+0300|SEVERE|glassfish3.0|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=24;_ThreadName=Thread-1;|Cannot connect to database server = com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.|#]

    It fails precisely at this line:

        Statement s = conn.createStatement();

    where conn is defined as follows:

        private static java.sql.Connection conn;

    For this app I have set up a connection pool with default parameters, and currently the app uses both JPA and direct JDBC queries. Recreating the connection pool achieved nothing, and pinging the connection pool gave this message:

        Ping Connection Pool for pool is Failed. Ping failed Exception - Connection could not be
        allocated because: Communications link failure
        The last packet sent successfully to the server was 0 milliseconds ago. The driver has
        not received any packets from the server. Please check the server.log for more details.
        Ping failed Exception - Connection could not be allocated because: Communications link failure

    Flushing the connection pool gave:

        com.sun.enterprise.admin.cli.CommandException: remote failure: Failed to flush connection pool ...

    However, I can connect to the database from a terminal, and I have the same app working on my local machine with identical connection pool settings. Anyone have an idea what's going on or how to solve this?
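    A quick way to rule out the driver or the URL is a standalone JDBC probe run on the same host as GlassFish, taking the pool out of the equation entirely. A minimal sketch, assuming MySQL Connector/J is on the classpath; the URL, user and password are placeholders for the real pool settings:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MySqlProbe {
            public static void main(String[] args) throws Exception {
                // Placeholder URL -- substitute the host, port, schema and
                // credentials configured in the GlassFish connection pool.
                String url = "jdbc:mysql://dbhost:3306/mydb?connectTimeout=5000";
                try (Connection conn = DriverManager.getConnection(url, "user", "pass");
                     Statement s = conn.createStatement();
                     ResultSet rs = s.executeQuery("SELECT 1")) {
                    rs.next();
                    System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
                }
            }
        }

    If the probe connects where the pool cannot, suspicion shifts to the pool's datasource properties (such as serverName, portNumber, databaseName) rather than to MySQL itself.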

    Read the article

  • Is there an "embedded DBMS" to support multiple writer applications (processes) on the same db files

    - by Amir Moghimi
    I need to know if there is any embedded DBMS (preferably in Java, and not necessarily relational) which supports multiple writer applications (processes) on the same set of db files. BerkeleyDB supports multiple readers but just one writer; I need multiple writers and multiple readers. UPDATE: It is not a multiple-connection issue. I mean I do not need multiple connections to a running DBMS application (process) to write data; I need multiple DBMS applications (processes) to commit to the same storage files. HSQLDB, H2, JavaDB (Derby) and MongoDB do not support this feature. I think there may be some file-system limitations that prohibit this. If so, is there a file system that allows multiple writers on a single file? Use case: a high-throughput clustered system that intends to store its high-volume business log entries on SAN storage. Storing the business logs in separate files for each server does not fit, because query and indexing capabilities are needed across the whole set of business logs. Because "a SAN typically is its own network of storage devices that are generally not accessible through the regular network by regular devices", I want to use SAN network bandwidth for logging while cluster LAN bandwidth is used for other server-to-server and client-to-server communications.
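    For comparison, SQLite is one embedded engine that does let multiple OS processes write the same database file, serializing concurrent writers through file locks - with the documented caveat that its locking is unreliable on some network file systems, which matters for a SAN-shared volume. A minimal sketch using the xerial sqlite-jdbc driver; the file path is a placeholder:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class SharedFileWriter {
            public static void main(String[] args) throws Exception {
                // Any number of independent processes can run this against the
                // same file; SQLite serializes concurrent writers via file locking.
                try (Connection conn =
                         DriverManager.getConnection("jdbc:sqlite:/mnt/san/bizlog.db");
                     Statement st = conn.createStatement()) {
                    st.execute("PRAGMA busy_timeout = 5000"); // queue behind other writers instead of failing fast
                    st.execute("CREATE TABLE IF NOT EXISTS biz_log (ts TEXT, server TEXT, entry TEXT)");
                    st.execute("INSERT INTO biz_log VALUES (datetime('now'), 'node-1', 'sample entry')");
                }
            }
        }

    Whether the locking holds up on the actual SAN file system would need testing before committing to this design.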

    Read the article

  • Automatically generating a Regex from a set of strings residing in a DB (C#)

    - by Muhammad Adeel Zahid
    Hello everyone, I have about 100,000 strings in a database and I want to know if there is a way to automatically generate a regex pattern from these strings. All of them are alphabetic strings that use a subset of the English alphabet ((X, W, V) are not used, for example). Is there any function or library that can help me achieve this in C#? Example strings are KHTK and RAZ; given these two strings, my target is to generate a regex that allows patterns like (k, kh, kht, khtk, r, ra, raz), case insensitive of course. I have downloaded and used some C# applications that help in generating regexes, but they are not useful in my scenario, because I want a process in which I sequentially read strings from the db and add rules to the regex, so that the regex can be reused later in the application or saved to disk. I am new to regex patterns and don't know if the thing I am asking is even possible; if it is not, please suggest some alternate approach. Any help and suggestions are highly appreciated. regards Adeel Zahid
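    One way to get exactly the prefix behaviour described above is to generate, for each stored word, a pattern of nested optional groups, then join the per-word patterns with alternation. A sketch of that idea in Java rather than C# (the construction ports directly; .NET's Regex.Escape plays the role of Pattern.quote); the two words come from the question, everything else is illustrative:

        import java.util.List;
        import java.util.regex.Pattern;
        import java.util.stream.Collectors;

        public class PrefixRegexBuilder {

            // "KHTK" -> "K(?:H(?:T(?:K)?)?)?" : matches K, KH, KHT and KHTK.
            static String prefixAlternative(String word) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < word.length(); i++) {
                    if (i > 0) sb.append("(?:");
                    sb.append(Pattern.quote(String.valueOf(word.charAt(i))));
                }
                for (int i = 1; i < word.length(); i++) sb.append(")?");
                return sb.toString();
            }

            // Joins the per-word patterns; the resulting pattern string can be
            // saved to disk and recompiled later.
            static Pattern build(List<String> words) {
                String alternation = words.stream()
                        .map(PrefixRegexBuilder::prefixAlternative)
                        .collect(Collectors.joining("|"));
                return Pattern.compile("^(?:" + alternation + ")$", Pattern.CASE_INSENSITIVE);
            }

            public static void main(String[] args) {
                Pattern p = build(List.of("KHTK", "RAZ"));
                System.out.println(p.matcher("kht").matches()); // true
                System.out.println(p.matcher("khz").matches()); // false
            }
        }

    With 100,000 words the alternation gets large; collapsing shared prefixes into a trie before emitting the pattern keeps it compact and is worth doing before persisting the pattern.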

    Read the article

  • Help me with DB design

    - by eugeneK
    Hi, I'm developing a text ads system - a small clone of Google Ads. Here is a diagram with the common tables. To keep it short: an advertiser can have up to 10 variants of the same campaign with different text variations, can geo-target his ads, and unique impressions count only for an IP that hasn't been on a certain site for more than 24 hours. Pretty simple, but the question is: what am I lacking here, from your experience? Later on it would be much harder to fix design flaws, and some of you have probably done something alike; there are also many SQL gurus in here, so maybe I over-normalized the DB, or did not normalize it as needed? Second question: my end goal is to get ads for a user from, e.g., Germany who hasn't seen the same ad on the same site for 24 hours, as long as the ads fit the user's country. Each impression is counted, the same as each click if there is one. I need to get 5 "random" ads based on IP, country and higher CPC (pay per click). How can I achieve this with the current design, or how should I design the database so that it would be easy to get ads and show stats for advertisers... thanks for any help...
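    Since the diagram isn't visible here, the following can only be the shape of the selection query, with hypothetical table and column names (ads, campaigns, impressions, cpc and so on) standing in for whatever the real schema uses - a sketch of "5 ads for this country, unseen by this IP on this site in the last 24 hours, favouring higher CPC", wrapped in JDBC:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class AdSelector {
            // Hypothetical schema: ads(ad_id, campaign_id, text),
            // campaigns(campaign_id, country, cpc),
            // impressions(ad_id, site_id, ip, seen_at).
            static final String PICK_ADS =
                "SELECT a.ad_id, a.text, c.cpc " +
                "FROM ads a " +
                "JOIN campaigns c ON c.campaign_id = a.campaign_id " +
                "WHERE c.country = ? " +
                "  AND NOT EXISTS (SELECT 1 FROM impressions i " +
                "                  WHERE i.ad_id = a.ad_id AND i.site_id = ? AND i.ip = ? " +
                "                    AND i.seen_at > NOW() - INTERVAL 24 HOUR) " +
                "ORDER BY c.cpc DESC, RAND() " +
                "LIMIT 5";

            static void printAds(Connection conn, String country, long siteId, String ip) throws Exception {
                try (PreparedStatement ps = conn.prepareStatement(PICK_ADS)) {
                    ps.setString(1, country);
                    ps.setLong(2, siteId);
                    ps.setString(3, ip);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("ad_id") + ": " + rs.getString("text"));
                        }
                    }
                }
            }
        }

    Ordering strictly by CPC first means the same top-paying ads always win; if CPC should weight rather than dominate the choice, the ORDER BY would need something like (RAND() * cpc) DESC instead.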

    Read the article

  • create new db in mysql with php syntax

    - by Jacksta
    I am trying to create a new db called testDB2; my code is below. On running the script I get an error on line 7:

        Fatal error: Call to undefined function mysqlquery() in /home/admin/domains/domain.com.au/public_html/db_createdb.php on line 7

    This is my code:

        <?
        $sql = "CREATE database testDB2";
        $connection = mysql_connect("localhost", "admin_user", "pass") or die(mysql_error());
        $result = mysqlquery($sql, $connection) or die(mysql_error());
        if ($result) {
            $msg = "<p>Databse has been created!</p>";
        }
        ?>
        <HTML>
        <head>
        <title>Create MySQL database</title>
        </head>
        <body>
        <? echo "$msg"; ?>
        </body>
        </HTML>

    Read the article

  • Problem storing a hash in DB using Storable::nfreeze in Perl

    - by Sam
    I want to insert a hash in the db using Storable::nfreeze, but the data is not inserted properly. My code is as follows:

        %rec=();
        $rec{'name'} = 'my name';
        $rec{'address'} = 'my address';
        my $order1 = new Order();
        $order1->set_session(\%rec);
        $self->createOrder($order1);

        sub createOrder {
            my $self  = $_[0];
            my $order = $_[1];
            # Retrieve the fields to insert into the database.
            my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
            my $session = %{$order->get_session()};
            $st->execute(&Storable::nfreeze(\%session), .....);
            $st->finish();
        }

        sub getOrder {
            ...
            my $session = &Storable::thaw( $ref->{'session'} );
            .....
        }

    The thaw is working fine, because I tested it with some rows that had been inserted correctly, but when I try to get a row that was inserted using the createOrder subroutine, I get an error saying:

        Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line that calls thaw; the nfreeze did not store the hash properly. Can someone point me to what I'm doing wrong in the createOrder subroutine? I know the module version has nothing to do with the problem.

    Read the article

  • Adding a row to a BindingSource gives a different autoincrement value from what's saved into the DB

    - by Ruben Trancoso
    I have a DataGridView that shows a list of records, and when I hit an insert button, a form should add a new record, edit its values and save it. I have a BindingSource bound to the DataGridView, and I pass it as a parameter to a NEW RECORD form:

        // When the form opens it adds a new row, and the DataGridView
        // displays this new record right away.
        DataRowView currentRow;
        currentRow = (DataRowView) myBindindSource.AddNew();

    When the user confirms the save, I do this inside the form:

        myBindindSource.EndEdit();

    and after the form is disposed, the new row is saved and the binding source position is updated to the new row:

        DataRowView drv = myForm.CurrentRow;
        avaliadoTableAdapter.Update(drv.Row);
        avaliadoBindingSource.Position = avaliadoBindingSource.Find("ID", drv.Row.ItemArray[0]);

    The problem is that this table has an AUTOINCREMENT field, and the value saved may not correspond to the value the BindingSource gave at EDIT TIME. So when I close and open the DataGridView again, the new row shows the ID based on the available slot in the underlying DB at the moment it was saved, ignoring the value the BindingSource generated at EDIT TIME. Since the value given by the binding source is meant to be used by another table as a foreign key, this makes the reference inconsistent. Is there a way to get the real ID that was saved to the database?
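    The usual cure is to read back the key the database actually assigned, instead of trusting the client-side placeholder, and only then wire up the foreign keys. As a sketch of the pattern in JDBC terms (table and column names are illustrative; in ADO.NET the same idea means refreshing the row's ID right after TableAdapter.Update):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class InsertReturningKey {
            // Inserts a row and returns the AUTOINCREMENT id the database really
            // assigned, so other tables can reference the true value as a foreign key.
            static long insertRow(Connection conn, String name) throws Exception {
                String sql = "INSERT INTO avaliado (name) VALUES (?)"; // illustrative table/column
                try (PreparedStatement ps =
                         conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                    try (ResultSet keys = ps.getGeneratedKeys()) {
                        keys.next();
                        return keys.getLong(1);
                    }
                }
            }
        }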

    Read the article

  • Populating a JComboBox with data from a db table column

    - by mnmyles
    Hi guys, I have a JComboBox which I have named "venue_JCB", and I want to load details from a table column on form load. In initComponents() I have declared the call to accessDBinit();, and after initComponents() I have implemented the accessDBinit() method as follows:

        void accessDBinit() // Get student list in the list box
        {
            try {
                sql = "SELECT classLevel FROM class_session ORDER BY classLevel";
                java.lang.Class.forName("com.mysql.jdbc.Driver");
                Connection con2 = DriverManager.getConnection("jdbc:mysql:///coast", "root", "");
                Statement stmt2 = (Statement) con2.createStatement();
                boolean hasResults = stmt2.execute(sql);
                if (hasResults) {
                    ResultSet rs2 = stmt2.getResultSet();
                    if (rs2 != null) {
                        displayResultsinit(rs2);
                    }
                    db.stmt.close();
                }
            } catch (Exception e) {
                System.out.println(e + "Error in connection");
            }
        }

        void displayResultsinit(ResultSet r) throws SQLException
        {
            ResultSetMetaData rmeta = r.getMetaData();
            int numColumns = rmeta.getColumnCount();
            while (r.next()) {
                for (int i = 1; i <= numColumns; ++i) {
                    if (i <= numColumns) {
                        venue_JCB.addItem(r.getString(i));
                    }
                }
            }
        }

    The method works (it populates the combo box), but on form load, and also whenever I select an item from the combo box, I get the following at the output bar:

        java.lang.NullPointerExceptionError in connection

    Could someone please tell me where I am going wrong with the method? Thanks in advance. regards Malachi
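    The stack trace is being swallowed by that catch block, but the most likely culprit is db.stmt.close(): the query ran on stmt2, and stmt on the separate db object was presumably never opened, so closing it dereferences null. A reworked sketch of the method (assuming the java.sql imports the form already uses; only standard JDBC here):

        void accessDBinit() {
            String sql = "SELECT classLevel FROM class_session ORDER BY classLevel";
            try (Connection con2 = DriverManager.getConnection("jdbc:mysql:///coast", "root", "");
                 Statement stmt2 = con2.createStatement();
                 ResultSet rs2 = stmt2.executeQuery(sql)) { // a plain SELECT: executeQuery is enough
                while (rs2.next()) {
                    venue_JCB.addItem(rs2.getString(1)); // only one column is selected
                }
            } catch (Exception e) {
                e.printStackTrace(); // keep the stack trace instead of swallowing it
            }
        }

    Try-with-resources also closes the connection, statement and result set on every path, which the original method never did.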

    Read the article

  • why DataColumn AllowDbNull is true even if oracle db does not allow null

    - by matti
    Hi. I have column SomeId in table SomeLink. When I look with tOra or SQL Plus Worksheet, both state it is NOT NULL:

        tOra:     SOMEID  INTEGER  default {null}  NOT NULL  comment {null}
        SQL Plus: SOMEID  NOT NULL  NUMBER(38)

    I have authored a method that's intended to give default values to all NOT NULL fields that don't have values:

        public static void GetDefaultValuesForNonNullColumns(DataRow row)
        {
            foreach (DataColumn col in row.Table.Columns)
            {
                if (Convert.IsDBNull(row[col]) && !col.AllowDBNull)
                {
                    if (ColumnIsNumeric(col.DataType))
                        row[col] = 0;
                    else if (col.DataType == typeof(DateTime))
                        row[col] = DateTime.Now;
                    else if (col.DataType == typeof(String))
                        row[col] = string.Empty;
                    else if (col.DataType == typeof(Char))
                        row[col] = ' ';
                    else
                        throw new Exception(string.Format("Unsupported column type: {0}", col.DataType));
                }
            }
        }

    When SOMEID is handled in the loop, AllowDBNull = true. I really can't understand it. The table is created in the DataSet like this:

        _someLinkAdptr = _dbFactory.CreateDataAdapter();
        _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand();
        _someLinkAdptr.SelectCommand.Connection = _cnctn;
        _someLinkAdptr.SelectCommand.CommandText = GetSomeLinkSelectTxtAndParams(
            _someLinkAdptr.SelectCommand, UndefinedValue.ToString(), UndefinedValue.ToString());

    The select command returns no rows; the idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the dataset's table like this:

        private static void CreateDocLink(int anId, int anotherId)
        {
            DataRow row = _someDataSet.Tables["SomeLink"].NewRow();
            row["AnId"] = anId;
            row["AnotherId"] = anotherId;
            Utility.GetDefaultValuesForNonNullColumns(row);
            _someDataSet.Tables["SomeLink"].Rows.Add(row);
        }

    When the DataAdapter is updated to the Oracle db I get:

        ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID)

    Cheers & BR - Matti

    Read the article

  • SQL Inner Join : DB stuck

    - by SurfingCat
    I posted this question a few days ago but didn't explain exactly what I wanted, so here it is again, better formulated and with some added information. I have a MySQL DB with MyISAM tables. The two relevant tables are:

        orders_products: orders_products_id, orders_id, product_id, product_name, product_price, product_name, product_model, final_price, ...
        products:        products_id, manufacturers_id, ...

    (For full information about the tables, see the products screenshot and the orders_products screenshot.) Now what I want is this: get all orders that ordered products with manufacturers_id = 1, plus the product name of the product of each such order, grouped by orders. What I did so far is this:

        SELECT op.orders_id, p.products_id, op.products_name, op.products_price, op.products_quantity
        FROM orders_products op, products p
        INNER JOIN products ON op.products_id = p.products_id
        WHERE p.manufacturers_id = 1
        AND p.orders_id > 10000

    (p.orders_id > 10000 is just for testing, to get only a few order ids.) But this query takes a very long time to execute, if it even works - the SQL server got stuck two times. Where is the mistake?
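    Note that the FROM clause references products twice - once as the alias p and once, unaliased, in the INNER JOIN - so the unaliased copy is never constrained by anything and the query degenerates into a cross product over every row of products; orders_id also lives on orders_products, not on products. A corrected version under the table definitions above, wrapped in a small JDBC runner (connection settings are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class ManufacturerOrders {
            public static void main(String[] args) throws Exception {
                String query =
                    "SELECT op.orders_id, p.products_id, op.products_name, " +
                    "       op.products_price, op.products_quantity " +
                    "FROM orders_products op " +
                    "INNER JOIN products p ON p.products_id = op.products_id " +
                    "WHERE p.manufacturers_id = 1 " +
                    "  AND op.orders_id > 10000 " + // the test filter from the question
                    "ORDER BY op.orders_id";
                try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://localhost/shop", "user", "pass");
                     PreparedStatement ps = c.prepareStatement(query);
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("orders_id") + " " + rs.getString("products_name"));
                    }
                }
            }
        }

    Indexes on products.manufacturers_id and orders_products.products_id keep the single join cheap.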

    Read the article

  • DB design for master file in enterprise software

    - by Thang Nguyen
    Dear all, I want to write enterprise software and am now in the DB design phase. The software will have some master data such as Suppliers, Customers, Inventories, Bankers... I am considering 2 options:

    1. Put each of these in its own separate table. Advantage: each table has exactly the fields needed for that kind of master file (Customer: name, address, ... / Inventory: type, manufacturer, condition, ...). Disadvantage: not flexible - when I want a new type of master data, such as Insurer, I have to design another table.

    2. Put all of them in one table that has a foreign key to a second table holding the type of each kind of master data (table 1: id, data_type, code, name, address, ...; table 2: data_type, data_type_name). Advantage: flexible - if I want more master data, such as Insurer, I just add a row to table 2 (code: 002, name: Insurer) and then put the details of each insurer into table 1. Disadvantage: table 1 must have enough fields to store every kind of information, including customer name, address, account, inventory manufacturer, inventory quality...

    So which method do you usually use (or which do you think works better)? Thank you very much

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stackoverflow community, we are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. The situation goes something like this: certain tables in a MySQL database need to be displayed as HTML. Because the site is designed in Spanish, the database contains characters like "á" or "ñ". Our internal policy is to set all the encodings to UTF-8, including all the databases and tables. The problem is that when we retrieve the content from the DB, some characters are displayed as question marks. We are out of ideas; these are all the things we have already tried and double-checked:

    1. The SQL file from which we load all the data is properly UTF-8 encoded.
    2. The SQL is loaded through phpMyAdmin (which is configured as UTF-8), and the resulting tables are displayed properly.
    3. The NetBeans environment used for coding is also set to UTF-8.

    The weird thing is that all the content that is hard-coded in PHP or HTML is displayed properly; only the values extracted from the database have issues. Any ideas? Thank you very much.

    Read the article

  • how to model a follower stream in appengine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag and other text. Users follow other users, and can also follow hashtags. I am outlining the way I've designed this below, but there are two limitations in my design, and I was wondering if others have smarter ways to accomplish the same goal. The issues are:

    1. The list of followers is copied in for each record.
    2. If a new follower is added or one is removed, 'all' the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are added into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ from streamHashtags where followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

    Read the article

  • MySQL db Audit Trail Trigger

    - by Natkeeran
    I need to track changes (an audit trail) in certain tables in a MySQL db, and I am trying to implement the solution suggested here. I have an AuditLog table with the following columns: AuditLogID, TableName, RowPK, FieldName, OldValue, NewValue, TimeStamp. The stored procedure below executes fine and gets created. A call to the procedure such as:

        CALL addLogTrigger('ProductTypes', 'ProductTypeID');

    executes, but does not create any triggers (see the image) - SHOW TRIGGERS returns an empty set. Please let me know what could be the issue, or an alternate way to implement this.

        DROP PROCEDURE IF EXISTS addLogTrigger;

        DELIMITER $

        CREATE PROCEDURE addLogTrigger(IN tableName VARCHAR(255), IN pkField VARCHAR(255))
        BEGIN
            SELECT CONCAT(
                'DELIMITER $\n',
                'CREATE TRIGGER ', tableName, '_AU AFTER UPDATE ON ', tableName,
                ' FOR EACH ROW BEGIN ',
                GROUP_CONCAT(
                    CONCAT(
                        'IF NOT( OLD.', column_name, ' <=> NEW.', column_name,
                        ') THEN INSERT INTO AuditLog (',
                        'TableName, ', 'RowPK, ', 'FieldName, ', 'OldValue, ', 'NewValue',
                        ') VALUES ( ''', table_name, ''', NEW.', pkField, ', ''', column_name,
                        ''', OLD.', column_name, ', NEW.', column_name, '); END IF;'
                    )
                    SEPARATOR ' '
                ),
                ' END;$'
            )
            FROM information_schema.columns
            WHERE table_schema = database()
              AND table_name = tableName;
        END$

        DELIMITER ;
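    One observation, hedged, from reading the procedure: the SELECT CONCAT(...) only returns the trigger DDL as a result set - nothing in the procedure executes it, and MySQL 5.x does not allow CREATE TRIGGER to be run via PREPARE inside a stored procedure, which is presumably why the original article has you run the generated statement yourself. A sketch of doing that from client code in Java (connection settings are placeholders; the DELIMITER lines are a mysql command-line directive, so they are stripped before the DDL is sent to the server):

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TriggerInstaller {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mydb", "user", "pass")) {
                    String ddl;
                    try (CallableStatement cs = c.prepareCall("{call addLogTrigger(?, ?)}")) {
                        cs.setString(1, "ProductTypes");
                        cs.setString(2, "ProductTypeID");
                        try (ResultSet rs = cs.executeQuery()) {
                            rs.next();
                            ddl = rs.getString(1);
                        }
                    }
                    // DELIMITER is a client directive, not SQL: drop it, along
                    // with the trailing '$' the procedure appends.
                    ddl = ddl.replace("DELIMITER $\n", "").trim();
                    if (ddl.endsWith("$")) ddl = ddl.substring(0, ddl.length() - 1);
                    try (Statement st = c.createStatement()) {
                        st.execute(ddl); // a plain Statement uses the text protocol, so CREATE TRIGGER is accepted
                    }
                }
            }
        }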

    Read the article

  • PEAR:DB connection parameters

    - by Markus Ossi
    I just finished my first PHP site and now I have a security-related question. I used PEAR:DB for the database connection and put its parameters in a separate file. How should I hide this parameter file? I found a guide (http://www.kitebird.com/articles/peardb.html) that says: "Another way to specify connection parameters is to put them in a separate file that you reference from your main script. ... It also enables you to move the parameter file outside of the web server's document tree, which prevents its contents from being displayed literally if the server becomes misconfigured and starts serving PHP scripts as plain text." I have now put my file in a directory like this: /include/db_parameters.inc. However, if I go to this URL, the web server shows me the contents of the file, including my database username and password. From what I've understood, I should protect this file so that even if PHP were served as text, nobody could read it. What does "outside of the web server's document tree" mean here? Moving the PHP file out of the public_html directory altogether, deeper into the server file system? Some chmod?

    Read the article

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing trouble with our MSI install. As part of our app, we install a bunch of services and let the user pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it runs after "InstallFinalize", very late in the process:

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom>
        </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just adding Before='StartServices' to the Custom element didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is "&DbUpdateManager=3". From what I can deduce from trial & error, this probably means "the DbUpdateManager feature must be published". Now, trouble is: "PublishFeature" comes way at the end of the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify Before='StartServices', the condition "the DbUpdateManager feature must be published" isn't true yet, and DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, DbUpdateManager sometimes doesn't execute at all, sometimes more than once - no real clear pattern as to what happens when..... Any more ideas? Is there a condition I could check, "the DbUpdateManager feature is installed", that would be true after the "InstallFiles" step? Marc

    Read the article

  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that violate their constraints, yet when I try to group rows, MySQL doesn't recognise the rows as being similar. Example: table A has a column "Name" with the UNIQUE property. The table contains one row with the name 'Hach?' and one row with the same name but a square at the end instead of the '?' (a character which I can't reproduce in this text field). A GROUP BY on these 2 rows returns 2 separate rows. This causes several problems, including the fact that I can't export and reimport the database: on reimporting, an error mentions that an INSERT has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat; in practice, that would take forever. Is there a way to list all the anomalies, or to force the database to recheck its constraints (and list all the values/rows that violate them)? I can supply the .MYD file if that would be helpful.

    Read the article

  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I'd asked earlier, which phrased this as more of a programming problem than a database problem: http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku

    I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres database after running db:push. In short, my app runs properly on my original MySQL database, but throws Postgres errors on Heroku when executing any query on the ID column, which seems to have been stored in Postgres as TEXT even though it is stored as INT in MySQL. My question is why the ID column is being created as TEXT in Postgres on the data transfer to Heroku, and whether there's any way for me to prevent this. Here's the output from a heroku console session which demonstrates the issue:

        Ruby console for myapp.heroku.com
        >> Post.first.title
        => "Welcome to First!"
        >> Post.first.title.class
        => String
        >> Post.first.id
        => 1
        >> Post.first.id.class
        => Fixnum
        >> Post[1]
        PostgresError: ERROR: operator does not exist: text = integer
        LINE 1: ...", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
        Query: SELECT "id", "name", "email", "url", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1

    Thanks!
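    If the transferred column really did land as text, one direct (if blunt) workaround is to convert it in place with Postgres's ALTER TABLE ... USING cast. A sketch run over JDBC, assuming the column holds only digits and nothing else depends on its type; the credentials are placeholders for whatever Heroku exposes:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class FixIdColumn {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection(
                         "jdbc:postgresql://host/dbname", "user", "pass");
                     Statement st = c.createStatement()) {
                    // Rewrites the column as integer, casting each existing value.
                    st.execute("ALTER TABLE posts ALTER COLUMN id TYPE integer USING id::integer");
                }
            }
        }

    The same ALTER statement can be run verbatim from any psql session against the Heroku database.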

    Read the article

  • PHP Form - Edit & Delete via Text File Db

    - by Jax
    Hi, I pieced together the script below from various tutorials, examples, etc. Right now the script saves Id, Name, Url with a "|" delimiter to a text-file db like:

        1|John|http://www.john.com|
        2|Mark|http://www.mark.com|
        3|Fred|http://www.fred.com|

    But I'm having a hard time trying to make the "UPDATE" and "DELETE" buttons work. Can someone please post code which will:

    - let me update/save any changed data for that row (for the UPDATE button)
    - let me delete that row (for the DELETE button)

    I would like to keep the output format of the script below, too. Thanks, D

        $file = "data.txt";
        $name = $_POST['name'];
        $url = $_POST['url'];
        $data = file('data.txt');
        $i = 1;
        foreach ($data as $line) {
            $line = explode('|', $line);
            $i++;
        }
        if (isset($_POST['submits'])) {
            $fp = fopen($file, "a+");
            fwrite($fp, $i."|".$name."|".$url."|\n");
            fclose($fp);
        }
        ?>

    Read the article

  • Invoking a SOAP web service from an Oracle DB

    - by Mousarules
    Hi all, kindly note that I'm trying to invoke a SOAP web service from an Oracle DB using PL/SQL. After doing some investigation, it appears I have to use the UTL_HTTP package, but it didn't work for me! Kindly advise where exactly I should place the following SOAP request in PL/SQL so it can be invoked... is it possible?

    SOAP 1.1 - the following is a sample SOAP 1.1 request and response; the placeholders shown need to be replaced with actual values:

        POST /gmgwebservice/service.asmx HTTP/1.1
        Host: bulk.umniah.com
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://tempuri.org/SendSMS"

        <SendSMS xmlns="http://tempuri.org/">
          <UserName>string</UserName>
          <Password>string</Password>
          <MessageBody>string</MessageBody>
          <Sender>string</Sender>
          <Destination>string</Destination>
        </SendSMS>

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <SendSMSResponse xmlns="http://tempuri.org/">
          <SendSMSResult>string</SendSMSResult>
        </SendSMSResponse>

    This web service belongs to a web site called Bulk Messaging; the site sends an SMS to a specific mobile number after some text boxes are filled in. I need this to be done from Oracle Forms when a specific action occurs (a JOB), but I don't know how to use it inside my PL/SQL code. Hope that it's clear - is there something else I have to mention?

    Read the article

  • Can't fetch production db results using Google App Engine remote_api

    - by Alon
    Hey, I'm trying to work with /remote_api on an App Engine app I have running with django-patch. I want to select a few rows from my online production app locally, but I can't seem to manage to do so: everything authenticates fine and it doesn't break on imports, but when I try to fetch something it just doesn't print anything. I placed the test script inside my local app dir:

        #!/usr/bin/env python
        import os
        import sys

        # Hardwire in appengine modules to PYTHONPATH
        # or use wrapper to do it more elegantly
        appengine_dirs = ['myworkingpath']
        sys.path.extend(appengine_dirs)

        # Add your models to path
        my_root_dir = os.path.abspath(os.path.dirname(__file__))
        sys.path.insert(0, my_root_dir)

        from google.appengine.ext import db
        from google.appengine.ext.remote_api import remote_api_stub
        import getpass

        APP_NAME = 'Myappname'
        os.environ['AUTH_DOMAIN'] = 'gmail.com'
        os.environ['USER_EMAIL'] = '[email protected]'

        def auth_func():
            return (raw_input('Username:'), getpass.getpass('Password:'))

        # Use local dev server by passing in as parameter:
        #   servername='localhost:8080'
        # Otherwise, remote_api assumes you are targeting APP_NAME.appspot.com
        remote_api_stub.ConfigureRemoteDatastore(APP_NAME, '/remote_api', auth_func)

        # Do stuff like your code was running on App Engine
        from channel.models import Channel, Channel2Operator

        myresults = mymodel.all().fetch(10)
        for result in myresults:
            print result.key()

    It doesn't give any error or print anything, and neither does the remote_api console example from Google. When I print myresults I get [].

    Read the article

  • Error when feeding a MySQL db with python-parsed data

    - by Barnabe
    I use this bit of code to feed some data I have parsed from a web page to a MySQL database:

        c = db.cursor()
        c.executemany(
            """INSERT INTO data (SID, Time, Value1, Level1, Value2, Level2, Value3, Level3,
                                 Value4, Level4, Value5, Level5, ObsDate)
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            clean_data)

    The parsed data looks like this (there are several hundred such lines):

        clean_data = [(161,00:00:00,8.19,1,4.46,4,7.87,4,6.54,null,4.45,6,2010-04-12),(162,00:00:00,7.55,1,9.52,1,1.90,1,4.76,null,0.14,1,2010-04-12),(164,00:00:00,8.01,1,8.09,1,0,null,8.49,null,0.20,2,2010-04-12),(166,00:00:00,8.30,1,4.77,4,10.99,5,9.11,null,0.36,2,2010-04-12)]

    If I hard-code the data as above, MySQL accepts my request (except for some quibbles about formatting), but if the variable clean_data is instead defined as the result of the parsing code, like this:

        cleaner = [(""" $!!'""", ')]'),(' $!!', ') etc etc]

        def processThis(str, lst):
            for find, replace in lst:
                str = str.replace(find, replace)
            return str

        clean_data = processThis(data, cleaner)

    then I get the dreaded "TypeError: not enough arguments for format string". After playing with formatting options for a few hours (I am very new to this), I am confused... what is the difference between the hard-coded data and the result of the processThis function as far as MySQL is concerned? Any ideas greatly appreciated...

    Read the article

  • Magento, 1 db field not saved

    - by david parloir
    Hi there, I have a problem with one field of the db. With this code:

        $expireMonth = Mage::getStoreConfig('points_options/config_points/expiration_period', Mage::app()->getStore()->getId());
        if (!is_null($expireMonth) && ($expireMonth > 0)) {
            $expireDate = date("Y-m-d H:i:s", strtotime("+" . $expireMonth . " month"));
        } else {
            $expireDate = NULL;
        }
        //die($expireDate);

        //store in points history table
        $this->_pointsModel->setCustomerId($this->_customer->getId())
            ->setOrdersId('welcome')
            ->setPointsPending($pointsForNewCustomer)
            ->setPointsComment(Mage::helper('points')->__('welcome points'))
            ->setDateAdded(date('Y-m-d H:i:s'))
            ->setPointsStatus(2) //confirmed
            ->setPointsType('WE')
            ->setStoreId(Mage::app()->getStore()->getId())
            ->setExpireDate($expireDate)
            ->save();

    every field is saved in the table except for expire_date. If I uncomment the die($expireDate), I see the correct value, something like 2012-01-13 13:21:12. The field is defined as:

        `expire_date` datetime NULL

    Any thoughts?

    EDIT: the solution is:

        $expireDate = date("Y-m-d H:i:s", strtotime("+" . $expireMonth . " months"));

    Note the "s" in my strtotime expression.

    Read the article

  • Need help INSERTing record(s) into a MySQL DB

    - by JM4
    I have an online form which collects member information and stores it in a very long MySQL row. We allow up to 16 members to enroll at a single time, and the DB was originally structured to allow that. For example: if 1 member enrolls, his personal information (first name, last name, address, phone, email) is stored on a single row; if 15 members enroll all at once, their personal information is stored in that same single row. The row has columns for all 'possible' inputs. I am trying to consolidate this code so that every nth member that enrolls is put onto a new record in the database. I have seen suggestions for inserting multiple records, such as: INSERT INTO tablename VALUES (('$f1name', '$f1address', '$f1phone'), ('$f2name', '$f2address', '$f2phone'))... The issue with this is twofold: (1) I do not know how many records are being enrolled from person to person, so the only way to build the statement above is to use a loop; (2) the information collected from the forms is NOT a single array, so I can't loop through one array and have it parse out - my information is collected as individual input fields like Member1FirstName, Member1LastName, Member1Phone, Member2FirstName, Member2LastName, Member2Phone... and so on. Is it possible to store the information in separate rows WITHOUT using a loop (and therefore without having to go back and completely restructure my form field names and such, which can't happen due to the way the validation rules are built)?

    Read the article
